Test Report: KVM_Linux_crio 20107

8d7d309004e1c5aed2c11e9a2f72e102a81e4e45:2024-12-16:37505

Test fail (11/327)

TestAddons/parallel/Ingress (152.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-020871 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-020871 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-020871 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4a92e7de-8018-453e-9698-b51f8a038f3a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4a92e7de-8018-453e-9698-b51f8a038f3a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005617502s
I1216 10:34:47.936755  217519 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-020871 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.754848218s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-020871 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.206
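Note on the curl failure above: ssh reports the remote command's exit status, and curl exit code 28 is its "operation timed out" error, so the request to http://127.0.0.1/ inside the VM never got an answer within the ~2m10s the command ran. A minimal sketch for reproducing and narrowing this down by hand, assuming the addons-020871 profile is still up (the kubectl inspection commands are suggested debugging steps, not part of the test itself):

    # Re-run the exact check the test performs, but verbose and with an explicit timeout
    out/minikube-linux-amd64 -p addons-020871 ssh \
      "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Check that the ingress-nginx controller and the nginx backend are actually serving
    kubectl --context addons-020871 -n ingress-nginx get pods -o wide
    kubectl --context addons-020871 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50
    kubectl --context addons-020871 get ingress,svc,pod -n default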
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-020871 -n addons-020871
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 logs -n 25: (1.148461709s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:31 UTC |
	| delete  | -p download-only-270974                                                                     | download-only-270974 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:31 UTC |
	| delete  | -p download-only-893315                                                                     | download-only-893315 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:31 UTC |
	| delete  | -p download-only-270974                                                                     | download-only-270974 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:31 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-453115 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC |                     |
	|         | binary-mirror-453115                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40829                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-453115                                                                     | binary-mirror-453115 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:31 UTC |
	| addons  | disable dashboard -p                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC |                     |
	|         | addons-020871                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC |                     |
	|         | addons-020871                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-020871 --wait=true                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:33 UTC | 16 Dec 24 10:33 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:33 UTC | 16 Dec 24 10:34 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | -p addons-020871                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-020871 addons                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-020871 ip                                                                            | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-020871 addons                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-020871 ssh curl -s                                                                   | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-020871 addons                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-020871 ssh cat                                                                       | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | /opt/local-path-provisioner/pvc-2768e9dc-d30c-44a0-aa98-3d81d07df32d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-020871 addons                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-020871 addons                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-020871 ip                                                                            | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:36 UTC | 16 Dec 24 10:36 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 10:31:37
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 10:31:37.831683  218168 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:31:37.831820  218168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:31:37.831830  218168 out.go:358] Setting ErrFile to fd 2...
	I1216 10:31:37.831834  218168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:31:37.832004  218168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 10:31:37.832670  218168 out.go:352] Setting JSON to false
	I1216 10:31:37.833631  218168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8045,"bootTime":1734337053,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:31:37.833745  218168 start.go:139] virtualization: kvm guest
	I1216 10:31:37.836024  218168 out.go:177] * [addons-020871] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 10:31:37.837721  218168 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 10:31:37.837716  218168 notify.go:220] Checking for updates...
	I1216 10:31:37.839534  218168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:31:37.840926  218168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 10:31:37.842333  218168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 10:31:37.843678  218168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 10:31:37.845224  218168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 10:31:37.846706  218168 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:31:37.882179  218168 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 10:31:37.883825  218168 start.go:297] selected driver: kvm2
	I1216 10:31:37.883850  218168 start.go:901] validating driver "kvm2" against <nil>
	I1216 10:31:37.883867  218168 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 10:31:37.884733  218168 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 10:31:37.884859  218168 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 10:31:37.902062  218168 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 10:31:37.902125  218168 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 10:31:37.902406  218168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 10:31:37.902445  218168 cni.go:84] Creating CNI manager for ""
	I1216 10:31:37.902474  218168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 10:31:37.902483  218168 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 10:31:37.902531  218168 start.go:340] cluster config:
	{Name:addons-020871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-020871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:31:37.902630  218168 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 10:31:37.904551  218168 out.go:177] * Starting "addons-020871" primary control-plane node in "addons-020871" cluster
	I1216 10:31:37.905745  218168 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:31:37.905798  218168 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1216 10:31:37.905812  218168 cache.go:56] Caching tarball of preloaded images
	I1216 10:31:37.905894  218168 preload.go:172] Found /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 10:31:37.905907  218168 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1216 10:31:37.906201  218168 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/config.json ...
	I1216 10:31:37.906231  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/config.json: {Name:mk776e5f2bcf43e15d10ef296a4be30c7dd13575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:31:37.906412  218168 start.go:360] acquireMachinesLock for addons-020871: {Name:mk416695f38f04563a2ca155c5383ee4d8f0f97b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 10:31:37.906481  218168 start.go:364] duration metric: took 49.257µs to acquireMachinesLock for "addons-020871"
	I1216 10:31:37.906510  218168 start.go:93] Provisioning new machine with config: &{Name:addons-020871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-020871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 10:31:37.906587  218168 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 10:31:37.908284  218168 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1216 10:31:37.908464  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:31:37.908525  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:31:37.924184  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I1216 10:31:37.924744  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:31:37.925341  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:31:37.925366  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:31:37.925731  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:31:37.925904  218168 main.go:141] libmachine: (addons-020871) Calling .GetMachineName
	I1216 10:31:37.926061  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:31:37.926231  218168 start.go:159] libmachine.API.Create for "addons-020871" (driver="kvm2")
	I1216 10:31:37.926280  218168 client.go:168] LocalClient.Create starting
	I1216 10:31:37.926326  218168 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem
	I1216 10:31:38.004936  218168 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem
	I1216 10:31:38.084095  218168 main.go:141] libmachine: Running pre-create checks...
	I1216 10:31:38.084121  218168 main.go:141] libmachine: (addons-020871) Calling .PreCreateCheck
	I1216 10:31:38.084724  218168 main.go:141] libmachine: (addons-020871) Calling .GetConfigRaw
	I1216 10:31:38.085284  218168 main.go:141] libmachine: Creating machine...
	I1216 10:31:38.085302  218168 main.go:141] libmachine: (addons-020871) Calling .Create
	I1216 10:31:38.085536  218168 main.go:141] libmachine: (addons-020871) creating KVM machine...
	I1216 10:31:38.085571  218168 main.go:141] libmachine: (addons-020871) creating network...
	I1216 10:31:38.086739  218168 main.go:141] libmachine: (addons-020871) DBG | found existing default KVM network
	I1216 10:31:38.087580  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:38.087426  218191 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015cc0}
	I1216 10:31:38.087635  218168 main.go:141] libmachine: (addons-020871) DBG | created network xml: 
	I1216 10:31:38.087659  218168 main.go:141] libmachine: (addons-020871) DBG | <network>
	I1216 10:31:38.087672  218168 main.go:141] libmachine: (addons-020871) DBG |   <name>mk-addons-020871</name>
	I1216 10:31:38.087690  218168 main.go:141] libmachine: (addons-020871) DBG |   <dns enable='no'/>
	I1216 10:31:38.087703  218168 main.go:141] libmachine: (addons-020871) DBG |   
	I1216 10:31:38.087718  218168 main.go:141] libmachine: (addons-020871) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1216 10:31:38.087772  218168 main.go:141] libmachine: (addons-020871) DBG |     <dhcp>
	I1216 10:31:38.087812  218168 main.go:141] libmachine: (addons-020871) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1216 10:31:38.087862  218168 main.go:141] libmachine: (addons-020871) DBG |     </dhcp>
	I1216 10:31:38.087892  218168 main.go:141] libmachine: (addons-020871) DBG |   </ip>
	I1216 10:31:38.087900  218168 main.go:141] libmachine: (addons-020871) DBG |   
	I1216 10:31:38.087913  218168 main.go:141] libmachine: (addons-020871) DBG | </network>
	I1216 10:31:38.087931  218168 main.go:141] libmachine: (addons-020871) DBG | 
	I1216 10:31:38.093660  218168 main.go:141] libmachine: (addons-020871) DBG | trying to create private KVM network mk-addons-020871 192.168.39.0/24...
	I1216 10:31:38.163864  218168 main.go:141] libmachine: (addons-020871) setting up store path in /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871 ...
	I1216 10:31:38.163907  218168 main.go:141] libmachine: (addons-020871) building disk image from file:///home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1216 10:31:38.163921  218168 main.go:141] libmachine: (addons-020871) DBG | private KVM network mk-addons-020871 192.168.39.0/24 created
	I1216 10:31:38.163986  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:38.163773  218191 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 10:31:38.164015  218168 main.go:141] libmachine: (addons-020871) Downloading /home/jenkins/minikube-integration/20107-210204/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1216 10:31:38.438084  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:38.437934  218191 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa...
	I1216 10:31:38.487252  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:38.487072  218191 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/addons-020871.rawdisk...
	I1216 10:31:38.487301  218168 main.go:141] libmachine: (addons-020871) DBG | Writing magic tar header
	I1216 10:31:38.487316  218168 main.go:141] libmachine: (addons-020871) DBG | Writing SSH key tar header
	I1216 10:31:38.487327  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:38.487213  218191 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871 ...
	I1216 10:31:38.487344  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871
	I1216 10:31:38.487412  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube/machines
	I1216 10:31:38.487439  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871 (perms=drwx------)
	I1216 10:31:38.487452  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 10:31:38.487483  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube/machines (perms=drwxr-xr-x)
	I1216 10:31:38.487503  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204
	I1216 10:31:38.487511  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube (perms=drwxr-xr-x)
	I1216 10:31:38.487523  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins/minikube-integration/20107-210204 (perms=drwxrwxr-x)
	I1216 10:31:38.487532  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1216 10:31:38.487538  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 10:31:38.487549  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 10:31:38.487556  218168 main.go:141] libmachine: (addons-020871) creating domain...
	I1216 10:31:38.487566  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins
	I1216 10:31:38.487571  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home
	I1216 10:31:38.487581  218168 main.go:141] libmachine: (addons-020871) DBG | skipping /home - not owner
	I1216 10:31:38.488662  218168 main.go:141] libmachine: (addons-020871) define libvirt domain using xml: 
	I1216 10:31:38.488691  218168 main.go:141] libmachine: (addons-020871) <domain type='kvm'>
	I1216 10:31:38.488702  218168 main.go:141] libmachine: (addons-020871)   <name>addons-020871</name>
	I1216 10:31:38.488709  218168 main.go:141] libmachine: (addons-020871)   <memory unit='MiB'>4000</memory>
	I1216 10:31:38.488716  218168 main.go:141] libmachine: (addons-020871)   <vcpu>2</vcpu>
	I1216 10:31:38.488722  218168 main.go:141] libmachine: (addons-020871)   <features>
	I1216 10:31:38.488733  218168 main.go:141] libmachine: (addons-020871)     <acpi/>
	I1216 10:31:38.488739  218168 main.go:141] libmachine: (addons-020871)     <apic/>
	I1216 10:31:38.488745  218168 main.go:141] libmachine: (addons-020871)     <pae/>
	I1216 10:31:38.488756  218168 main.go:141] libmachine: (addons-020871)     
	I1216 10:31:38.488767  218168 main.go:141] libmachine: (addons-020871)   </features>
	I1216 10:31:38.488777  218168 main.go:141] libmachine: (addons-020871)   <cpu mode='host-passthrough'>
	I1216 10:31:38.488786  218168 main.go:141] libmachine: (addons-020871)   
	I1216 10:31:38.488795  218168 main.go:141] libmachine: (addons-020871)   </cpu>
	I1216 10:31:38.488805  218168 main.go:141] libmachine: (addons-020871)   <os>
	I1216 10:31:38.488814  218168 main.go:141] libmachine: (addons-020871)     <type>hvm</type>
	I1216 10:31:38.488825  218168 main.go:141] libmachine: (addons-020871)     <boot dev='cdrom'/>
	I1216 10:31:38.488839  218168 main.go:141] libmachine: (addons-020871)     <boot dev='hd'/>
	I1216 10:31:38.488849  218168 main.go:141] libmachine: (addons-020871)     <bootmenu enable='no'/>
	I1216 10:31:38.488856  218168 main.go:141] libmachine: (addons-020871)   </os>
	I1216 10:31:38.488861  218168 main.go:141] libmachine: (addons-020871)   <devices>
	I1216 10:31:38.488868  218168 main.go:141] libmachine: (addons-020871)     <disk type='file' device='cdrom'>
	I1216 10:31:38.488877  218168 main.go:141] libmachine: (addons-020871)       <source file='/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/boot2docker.iso'/>
	I1216 10:31:38.488884  218168 main.go:141] libmachine: (addons-020871)       <target dev='hdc' bus='scsi'/>
	I1216 10:31:38.488889  218168 main.go:141] libmachine: (addons-020871)       <readonly/>
	I1216 10:31:38.488895  218168 main.go:141] libmachine: (addons-020871)     </disk>
	I1216 10:31:38.488926  218168 main.go:141] libmachine: (addons-020871)     <disk type='file' device='disk'>
	I1216 10:31:38.488944  218168 main.go:141] libmachine: (addons-020871)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 10:31:38.488974  218168 main.go:141] libmachine: (addons-020871)       <source file='/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/addons-020871.rawdisk'/>
	I1216 10:31:38.488983  218168 main.go:141] libmachine: (addons-020871)       <target dev='hda' bus='virtio'/>
	I1216 10:31:38.488992  218168 main.go:141] libmachine: (addons-020871)     </disk>
	I1216 10:31:38.489004  218168 main.go:141] libmachine: (addons-020871)     <interface type='network'>
	I1216 10:31:38.489042  218168 main.go:141] libmachine: (addons-020871)       <source network='mk-addons-020871'/>
	I1216 10:31:38.489068  218168 main.go:141] libmachine: (addons-020871)       <model type='virtio'/>
	I1216 10:31:38.489079  218168 main.go:141] libmachine: (addons-020871)     </interface>
	I1216 10:31:38.489090  218168 main.go:141] libmachine: (addons-020871)     <interface type='network'>
	I1216 10:31:38.489101  218168 main.go:141] libmachine: (addons-020871)       <source network='default'/>
	I1216 10:31:38.489111  218168 main.go:141] libmachine: (addons-020871)       <model type='virtio'/>
	I1216 10:31:38.489121  218168 main.go:141] libmachine: (addons-020871)     </interface>
	I1216 10:31:38.489132  218168 main.go:141] libmachine: (addons-020871)     <serial type='pty'>
	I1216 10:31:38.489145  218168 main.go:141] libmachine: (addons-020871)       <target port='0'/>
	I1216 10:31:38.489167  218168 main.go:141] libmachine: (addons-020871)     </serial>
	I1216 10:31:38.489179  218168 main.go:141] libmachine: (addons-020871)     <console type='pty'>
	I1216 10:31:38.489190  218168 main.go:141] libmachine: (addons-020871)       <target type='serial' port='0'/>
	I1216 10:31:38.489203  218168 main.go:141] libmachine: (addons-020871)     </console>
	I1216 10:31:38.489213  218168 main.go:141] libmachine: (addons-020871)     <rng model='virtio'>
	I1216 10:31:38.489224  218168 main.go:141] libmachine: (addons-020871)       <backend model='random'>/dev/random</backend>
	I1216 10:31:38.489239  218168 main.go:141] libmachine: (addons-020871)     </rng>
	I1216 10:31:38.489251  218168 main.go:141] libmachine: (addons-020871)     
	I1216 10:31:38.489258  218168 main.go:141] libmachine: (addons-020871)     
	I1216 10:31:38.489270  218168 main.go:141] libmachine: (addons-020871)   </devices>
	I1216 10:31:38.489279  218168 main.go:141] libmachine: (addons-020871) </domain>
	I1216 10:31:38.489313  218168 main.go:141] libmachine: (addons-020871) 
	I1216 10:31:38.495034  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:af:86:ab in network default
	I1216 10:31:38.495540  218168 main.go:141] libmachine: (addons-020871) starting domain...
	I1216 10:31:38.495560  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:38.495565  218168 main.go:141] libmachine: (addons-020871) ensuring networks are active...
	I1216 10:31:38.496296  218168 main.go:141] libmachine: (addons-020871) Ensuring network default is active
	I1216 10:31:38.496621  218168 main.go:141] libmachine: (addons-020871) Ensuring network mk-addons-020871 is active
	I1216 10:31:38.497128  218168 main.go:141] libmachine: (addons-020871) getting domain XML...
	I1216 10:31:38.497763  218168 main.go:141] libmachine: (addons-020871) creating domain...
	I1216 10:31:39.907343  218168 main.go:141] libmachine: (addons-020871) waiting for IP...
	I1216 10:31:39.908131  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:39.908659  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:39.908689  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:39.908636  218191 retry.go:31] will retry after 311.745376ms: waiting for domain to come up
	I1216 10:31:40.222162  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:40.222649  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:40.222676  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:40.222623  218191 retry.go:31] will retry after 353.739286ms: waiting for domain to come up
	I1216 10:31:40.578472  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:40.578893  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:40.578935  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:40.578875  218191 retry.go:31] will retry after 384.988826ms: waiting for domain to come up
	I1216 10:31:40.965819  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:40.966402  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:40.966442  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:40.966371  218191 retry.go:31] will retry after 461.65384ms: waiting for domain to come up
	I1216 10:31:41.430075  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:41.430489  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:41.430525  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:41.430468  218191 retry.go:31] will retry after 500.241235ms: waiting for domain to come up
	I1216 10:31:41.932193  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:41.932572  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:41.932599  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:41.932559  218191 retry.go:31] will retry after 705.18908ms: waiting for domain to come up
	I1216 10:31:42.639118  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:42.639593  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:42.639620  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:42.639560  218191 retry.go:31] will retry after 1.064300662s: waiting for domain to come up
	I1216 10:31:43.705582  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:43.706052  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:43.706078  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:43.706014  218191 retry.go:31] will retry after 1.08333648s: waiting for domain to come up
	I1216 10:31:44.790719  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:44.791148  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:44.791174  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:44.791117  218191 retry.go:31] will retry after 1.713698041s: waiting for domain to come up
	I1216 10:31:46.506060  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:46.506525  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:46.506564  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:46.506510  218191 retry.go:31] will retry after 1.515937487s: waiting for domain to come up
	I1216 10:31:48.024268  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:48.024710  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:48.024741  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:48.024681  218191 retry.go:31] will retry after 2.369610901s: waiting for domain to come up
	I1216 10:31:50.397271  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:50.397649  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:50.397681  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:50.397619  218191 retry.go:31] will retry after 2.457466679s: waiting for domain to come up
	I1216 10:31:52.858207  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:52.858676  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:52.858701  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:52.858636  218191 retry.go:31] will retry after 3.867577059s: waiting for domain to come up
	I1216 10:31:56.727503  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:56.727939  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:56.727967  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:56.727899  218191 retry.go:31] will retry after 4.324595651s: waiting for domain to come up
	I1216 10:32:01.056520  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.056947  218168 main.go:141] libmachine: (addons-020871) found domain IP: 192.168.39.206
	I1216 10:32:01.056988  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has current primary IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.056997  218168 main.go:141] libmachine: (addons-020871) reserving static IP address...
	I1216 10:32:01.057427  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find host DHCP lease matching {name: "addons-020871", mac: "52:54:00:2f:5d:dc", ip: "192.168.39.206"} in network mk-addons-020871
	I1216 10:32:01.136665  218168 main.go:141] libmachine: (addons-020871) DBG | Getting to WaitForSSH function...
	I1216 10:32:01.136698  218168 main.go:141] libmachine: (addons-020871) reserved static IP address 192.168.39.206 for domain addons-020871
	I1216 10:32:01.136713  218168 main.go:141] libmachine: (addons-020871) waiting for SSH...
	I1216 10:32:01.139566  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.140104  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.140139  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.140318  218168 main.go:141] libmachine: (addons-020871) DBG | Using SSH client type: external
	I1216 10:32:01.140348  218168 main.go:141] libmachine: (addons-020871) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa (-rw-------)
	I1216 10:32:01.140386  218168 main.go:141] libmachine: (addons-020871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 10:32:01.140407  218168 main.go:141] libmachine: (addons-020871) DBG | About to run SSH command:
	I1216 10:32:01.140422  218168 main.go:141] libmachine: (addons-020871) DBG | exit 0
	I1216 10:32:01.265279  218168 main.go:141] libmachine: (addons-020871) DBG | SSH cmd err, output: <nil>: 
	I1216 10:32:01.265552  218168 main.go:141] libmachine: (addons-020871) KVM machine creation complete
	I1216 10:32:01.265952  218168 main.go:141] libmachine: (addons-020871) Calling .GetConfigRaw
	I1216 10:32:01.266616  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:01.266809  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:01.267014  218168 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1216 10:32:01.267032  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:01.268377  218168 main.go:141] libmachine: Detecting operating system of created instance...
	I1216 10:32:01.268396  218168 main.go:141] libmachine: Waiting for SSH to be available...
	I1216 10:32:01.268402  218168 main.go:141] libmachine: Getting to WaitForSSH function...
	I1216 10:32:01.268410  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.270644  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.270993  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.271017  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.271158  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.271363  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.271554  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.271709  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.271883  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:01.272130  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:01.272144  218168 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1216 10:32:01.372377  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 10:32:01.372406  218168 main.go:141] libmachine: Detecting the provisioner...
	I1216 10:32:01.372416  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.375331  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.375676  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.375706  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.375911  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.376117  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.376289  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.376454  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.376597  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:01.376771  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:01.376781  218168 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1216 10:32:01.481775  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1216 10:32:01.481883  218168 main.go:141] libmachine: found compatible host: buildroot
	I1216 10:32:01.481893  218168 main.go:141] libmachine: Provisioning with buildroot...
	I1216 10:32:01.481901  218168 main.go:141] libmachine: (addons-020871) Calling .GetMachineName
	I1216 10:32:01.482209  218168 buildroot.go:166] provisioning hostname "addons-020871"
	I1216 10:32:01.482244  218168 main.go:141] libmachine: (addons-020871) Calling .GetMachineName
	I1216 10:32:01.482503  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.485099  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.485450  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.485480  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.485596  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.485775  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.485934  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.486092  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.486317  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:01.486498  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:01.486512  218168 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-020871 && echo "addons-020871" | sudo tee /etc/hostname
	I1216 10:32:01.603470  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-020871
	
	I1216 10:32:01.603509  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.606372  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.606725  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.606759  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.607007  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.607250  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.607417  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.607514  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.607654  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:01.607843  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:01.607866  218168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-020871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-020871/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-020871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 10:32:01.718115  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
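The command above is the hostname-provisioning step: set the guest hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for it so the node can resolve its own name. A minimal Go sketch of how such a snippet could be composed for an arbitrary hostname (hostsFixCmd is a hypothetical helper, not minikube's actual code):

package main

import "fmt"

// hostsFixCmd composes the /etc/hosts snippet shown in the log above for a
// given hostname. Illustrative only; minikube builds this string elsewhere.
func hostsFixCmd(hostname string) string {
	return fmt.Sprintf(
		`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsFixCmd("addons-020871"))
}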
	I1216 10:32:01.718147  218168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 10:32:01.718204  218168 buildroot.go:174] setting up certificates
	I1216 10:32:01.718225  218168 provision.go:84] configureAuth start
	I1216 10:32:01.718240  218168 main.go:141] libmachine: (addons-020871) Calling .GetMachineName
	I1216 10:32:01.718553  218168 main.go:141] libmachine: (addons-020871) Calling .GetIP
	I1216 10:32:01.721544  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.721914  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.721939  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.722123  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.724791  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.725201  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.725230  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.725366  218168 provision.go:143] copyHostCerts
	I1216 10:32:01.725448  218168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 10:32:01.725596  218168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 10:32:01.725691  218168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
	I1216 10:32:01.725776  218168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.addons-020871 san=[127.0.0.1 192.168.39.206 addons-020871 localhost minikube]
	I1216 10:32:01.805753  218168 provision.go:177] copyRemoteCerts
	I1216 10:32:01.805820  218168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 10:32:01.805848  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.809048  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.809446  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.809477  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.809686  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.809911  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.810078  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.810246  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:01.891456  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 10:32:01.915662  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 10:32:01.940569  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 10:32:01.965262  218168 provision.go:87] duration metric: took 247.018075ms to configureAuth
	I1216 10:32:01.965300  218168 buildroot.go:189] setting minikube options for container-runtime
	I1216 10:32:01.965549  218168 config.go:182] Loaded profile config "addons-020871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:32:01.965665  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.968932  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.969400  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.969436  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.969683  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.969883  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.970048  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.970187  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.970401  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:01.970581  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:01.970595  218168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 10:32:02.190074  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 10:32:02.190101  218168 main.go:141] libmachine: Checking connection to Docker...
	I1216 10:32:02.190110  218168 main.go:141] libmachine: (addons-020871) Calling .GetURL
	I1216 10:32:02.191391  218168 main.go:141] libmachine: (addons-020871) DBG | using libvirt version 6000000
	I1216 10:32:02.193602  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.193990  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.194018  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.194454  218168 main.go:141] libmachine: Docker is up and running!
	I1216 10:32:02.194471  218168 main.go:141] libmachine: Reticulating splines...
	I1216 10:32:02.194483  218168 client.go:171] duration metric: took 24.268188098s to LocalClient.Create
	I1216 10:32:02.194517  218168 start.go:167] duration metric: took 24.268285342s to libmachine.API.Create "addons-020871"
	I1216 10:32:02.194544  218168 start.go:293] postStartSetup for "addons-020871" (driver="kvm2")
	I1216 10:32:02.194561  218168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 10:32:02.194592  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:02.194855  218168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 10:32:02.194889  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:02.197387  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.197712  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.197750  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.197912  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:02.198175  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:02.198345  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:02.198493  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:02.279622  218168 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 10:32:02.284224  218168 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 10:32:02.284258  218168 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 10:32:02.284358  218168 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 10:32:02.284390  218168 start.go:296] duration metric: took 89.836076ms for postStartSetup
	I1216 10:32:02.284438  218168 main.go:141] libmachine: (addons-020871) Calling .GetConfigRaw
	I1216 10:32:02.285205  218168 main.go:141] libmachine: (addons-020871) Calling .GetIP
	I1216 10:32:02.287975  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.288336  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.288362  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.288632  218168 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/config.json ...
	I1216 10:32:02.288841  218168 start.go:128] duration metric: took 24.382241529s to createHost
	I1216 10:32:02.288871  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:02.291317  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.291621  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.291640  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.291821  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:02.292016  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:02.292211  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:02.292375  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:02.292540  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:02.292712  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:02.292721  218168 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 10:32:02.393886  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734345122.370190624
	
	I1216 10:32:02.393919  218168 fix.go:216] guest clock: 1734345122.370190624
	I1216 10:32:02.393927  218168 fix.go:229] Guest: 2024-12-16 10:32:02.370190624 +0000 UTC Remote: 2024-12-16 10:32:02.288857281 +0000 UTC m=+24.498804031 (delta=81.333343ms)
	I1216 10:32:02.393987  218168 fix.go:200] guest clock delta is within tolerance: 81.333343ms
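The fix.go lines above compare the guest's clock against the host's and accept the drift when it is small. A minimal sketch of that comparison; maxClockDelta is an assumed tolerance, since the log only shows that an ~81ms delta was accepted, not the exact threshold minikube uses:

package main

import (
	"fmt"
	"time"
)

// maxClockDelta is an assumed tolerance for illustration.
const maxClockDelta = 2 * time.Second

// clockDeltaOK reports the absolute guest/host clock difference and whether
// it falls inside the tolerance.
func clockDeltaOK(guest, host time.Time) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= maxClockDelta
}

func main() {
	host := time.Now()
	guest := host.Add(81333343 * time.Nanosecond) // roughly the delta reported above
	d, ok := clockDeltaOK(guest, host)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", d, ok)
}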
	I1216 10:32:02.393995  218168 start.go:83] releasing machines lock for "addons-020871", held for 24.487500312s
	I1216 10:32:02.394044  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:02.394391  218168 main.go:141] libmachine: (addons-020871) Calling .GetIP
	I1216 10:32:02.397223  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.397531  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.397561  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.397705  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:02.398372  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:02.398595  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:02.398713  218168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 10:32:02.398777  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:02.398909  218168 ssh_runner.go:195] Run: cat /version.json
	I1216 10:32:02.398931  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:02.401953  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.401983  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.402321  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.402344  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.402381  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.402404  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.402499  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:02.402614  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:02.402685  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:02.402758  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:02.402848  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:02.402934  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:02.403000  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:02.403051  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:02.504015  218168 ssh_runner.go:195] Run: systemctl --version
	I1216 10:32:02.510362  218168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 10:32:02.678147  218168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 10:32:02.684462  218168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 10:32:02.684555  218168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 10:32:02.700378  218168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 10:32:02.700422  218168 start.go:495] detecting cgroup driver to use...
	I1216 10:32:02.700497  218168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 10:32:02.718240  218168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 10:32:02.732289  218168 docker.go:217] disabling cri-docker service (if available) ...
	I1216 10:32:02.732390  218168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 10:32:02.746524  218168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 10:32:02.760402  218168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 10:32:02.871922  218168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 10:32:03.013544  218168 docker.go:233] disabling docker service ...
	I1216 10:32:03.013632  218168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 10:32:03.028186  218168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 10:32:03.041790  218168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 10:32:03.189349  218168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 10:32:03.319128  218168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 10:32:03.336881  218168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
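This step points crictl at the CRI-O socket by writing /etc/crictl.yaml. A small Go sketch of the same idea; writeCrictlConfig is a hypothetical helper, and it writes to a caller-supplied directory so the sketch can run without root:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeCrictlConfig writes a crictl config that points at the CRI-O socket.
// The real target is /etc/crictl.yaml on the guest.
func writeCrictlConfig(dir string) (string, error) {
	path := filepath.Join(dir, "crictl.yaml")
	content := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	return path, os.WriteFile(path, []byte(content), 0644)
}

func main() {
	path, err := writeCrictlConfig(os.TempDir())
	if err != nil {
		panic(err)
	}
	fmt.Println("wrote", path)
}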
	I1216 10:32:03.357997  218168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 10:32:03.358068  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.368801  218168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 10:32:03.368882  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.379501  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.390004  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.401053  218168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 10:32:03.411996  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.422967  218168 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.440604  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
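The sed commands above adjust the CRI-O drop-in (02-crio.conf): pin the pause image, switch the cgroup manager to cgroupfs, and open unprivileged low ports. A rough Go sketch of the first two rewrites applied to an in-memory copy of the file (applyCrioOverrides is illustrative only):

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides rewrites the pause_image and cgroup_manager lines the
// same way the sed commands in the log do, but on a string in memory.
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(in))
}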
	I1216 10:32:03.451723  218168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 10:32:03.461757  218168 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 10:32:03.461826  218168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
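The sysctl probe failed because the br_netfilter module was not loaded, so the next step loads it with modprobe. A simplified sketch of that fallback (ensureBridgeNetfilter is a hypothetical helper; it shells out to the same commands the log shows):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter checks whether the bridge netfilter sysctl is
// readable; if not, it tries to load the br_netfilter kernel module.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // sysctl readable: the module is already in place
	}
	return exec.Command("sudo", "modprobe", "br_netfilter").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("br_netfilter unavailable:", err)
	}
}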
	I1216 10:32:03.476016  218168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 10:32:03.486214  218168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:32:03.601384  218168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 10:32:03.696632  218168 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 10:32:03.696754  218168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 10:32:03.701692  218168 start.go:563] Will wait 60s for crictl version
	I1216 10:32:03.701777  218168 ssh_runner.go:195] Run: which crictl
	I1216 10:32:03.705740  218168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 10:32:03.743157  218168 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 10:32:03.743299  218168 ssh_runner.go:195] Run: crio --version
	I1216 10:32:03.770395  218168 ssh_runner.go:195] Run: crio --version
	I1216 10:32:03.799779  218168 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1216 10:32:03.801195  218168 main.go:141] libmachine: (addons-020871) Calling .GetIP
	I1216 10:32:03.805053  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:03.805579  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:03.805604  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:03.805946  218168 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 10:32:03.810299  218168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
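The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the gateway IP: any stale line for that name is dropped and a fresh entry is appended. A small sketch of the same edit on an in-memory copy of the file (upsertHostsEntry is illustrative):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any existing tab-separated mapping for name and
// appends ip<TAB>name, mirroring the bash one-liner in the log.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}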
	I1216 10:32:03.825080  218168 kubeadm.go:883] updating cluster {Name:addons-020871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-020871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 10:32:03.825232  218168 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:32:03.825291  218168 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 10:32:03.863706  218168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1216 10:32:03.863779  218168 ssh_runner.go:195] Run: which lz4
	I1216 10:32:03.868037  218168 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 10:32:03.872594  218168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 10:32:03.872633  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
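Because the stat check found no /preloaded.tar.lz4 on the guest, the preload tarball is transferred. A minimal sketch of that copy-only-if-missing pattern; here both paths are local files, whereas minikube performs the copy over SSH:

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not already exist,
// mirroring the existence check in the log above.
func copyIfMissing(src, dst string) (copied bool, err error) {
	if _, err := os.Stat(dst); err == nil {
		return false, nil // already there, nothing to do
	}
	in, err := os.Open(src)
	if err != nil {
		return false, err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return false, err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err == nil, err
}

func main() {
	copied, err := copyIfMissing("preloaded.tar.lz4", "/tmp/preloaded.tar.lz4")
	fmt.Println("copied:", copied, "err:", err)
}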
	I1216 10:32:05.105305  218168 crio.go:462] duration metric: took 1.237299066s to copy over tarball
	I1216 10:32:05.105397  218168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 10:32:07.273674  218168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.168247404s)
	I1216 10:32:07.273704  218168 crio.go:469] duration metric: took 2.168362347s to extract the tarball
	I1216 10:32:07.273719  218168 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 10:32:07.310695  218168 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 10:32:07.351085  218168 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 10:32:07.351114  218168 cache_images.go:84] Images are preloaded, skipping loading
	I1216 10:32:07.351122  218168 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.2 crio true true} ...
	I1216 10:32:07.351250  218168 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-020871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-020871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
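The kubelet unit written above overrides ExecStart with the node-specific flags (hostname-override, node-ip, kubeconfig paths). A sketch of how that line could be assembled; kubeletExecStart is a hypothetical helper, not minikube's template code:

package main

import "fmt"

// kubeletExecStart builds the ExecStart line from the binary directory and
// the node's name and IP, matching the flags shown in the unit file above.
func kubeletExecStart(binDir, nodeName, nodeIP string) string {
	return fmt.Sprintf("ExecStart=%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
		" --config=/var/lib/kubelet/config.yaml --hostname-override=%s"+
		" --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
		binDir, nodeName, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("/var/lib/minikube/binaries/v1.31.2", "addons-020871", "192.168.39.206"))
}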
	I1216 10:32:07.351319  218168 ssh_runner.go:195] Run: crio config
	I1216 10:32:07.395680  218168 cni.go:84] Creating CNI manager for ""
	I1216 10:32:07.395704  218168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 10:32:07.395718  218168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 10:32:07.395747  218168 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-020871 NodeName:addons-020871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 10:32:07.395894  218168 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-020871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 10:32:07.395975  218168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 10:32:07.405401  218168 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 10:32:07.405501  218168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 10:32:07.414266  218168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1216 10:32:07.431082  218168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 10:32:07.448649  218168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1216 10:32:07.466874  218168 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1216 10:32:07.470995  218168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 10:32:07.483471  218168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:32:07.618146  218168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 10:32:07.638427  218168 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871 for IP: 192.168.39.206
	I1216 10:32:07.638459  218168 certs.go:194] generating shared ca certs ...
	I1216 10:32:07.638478  218168 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:07.638621  218168 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 10:32:07.945451  218168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt ...
	I1216 10:32:07.945491  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt: {Name:mk1b7e8e8343576c2625ea5df4c030990d1ed65c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:07.945686  218168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key ...
	I1216 10:32:07.945698  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key: {Name:mk150bb71a4d8bf2f7e593f850c268c3c5fb2826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:07.945776  218168 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 10:32:08.109781  218168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt ...
	I1216 10:32:08.109815  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt: {Name:mk7e569a459979b9ea3d41410c35f8efe6998d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.109997  218168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key ...
	I1216 10:32:08.110009  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key: {Name:mk7ecd3ed39b16ddc6e66b6c0ea0b6c9210b002b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.110079  218168 certs.go:256] generating profile certs ...
	I1216 10:32:08.110139  218168 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.key
	I1216 10:32:08.110153  218168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt with IP's: []
	I1216 10:32:08.285579  218168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt ...
	I1216 10:32:08.285620  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: {Name:mkd08a22d82c4cc9512ecba9ceb09ba16c728d8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.285806  218168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.key ...
	I1216 10:32:08.285818  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.key: {Name:mk7276986ea3e6e4bc9c4fe350372f9761df7065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.285886  218168 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key.50f7fc57
	I1216 10:32:08.285905  218168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt.50f7fc57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206]
	I1216 10:32:08.541997  218168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt.50f7fc57 ...
	I1216 10:32:08.542035  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt.50f7fc57: {Name:mkd8f73129f025770e82dc30cd4115ec508353a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.542203  218168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key.50f7fc57 ...
	I1216 10:32:08.542216  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key.50f7fc57: {Name:mk370ffb9ac91a9357cf5a90ed38d9a141605ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.542297  218168 certs.go:381] copying /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt.50f7fc57 -> /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt
	I1216 10:32:08.542374  218168 certs.go:385] copying /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key.50f7fc57 -> /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key
	I1216 10:32:08.542470  218168 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.key
	I1216 10:32:08.542506  218168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.crt with IP's: []
	I1216 10:32:08.723196  218168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.crt ...
	I1216 10:32:08.723234  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.crt: {Name:mkacaad176092c140f7a012d05a90c00be134aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.723407  218168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.key ...
	I1216 10:32:08.723421  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.key: {Name:mk860f21b3c7b9d6776d96c559f45e802c46a833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.723596  218168 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 10:32:08.723635  218168 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 10:32:08.723660  218168 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 10:32:08.723686  218168 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 10:32:08.724358  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 10:32:08.753250  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 10:32:08.777782  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 10:32:08.802474  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 10:32:08.827536  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 10:32:08.852092  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 10:32:08.877377  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 10:32:08.901431  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 10:32:08.926233  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 10:32:08.950234  218168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 10:32:08.966758  218168 ssh_runner.go:195] Run: openssl version
	I1216 10:32:08.972276  218168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 10:32:08.982674  218168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:08.986974  218168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:08.987041  218168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:08.992647  218168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
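The two commands above compute the CA certificate's OpenSSL subject hash and symlink <hash>.0 to the PEM file so the system trust store picks it up. A rough sketch of the same sequence (linkCertByHash is illustrative and assumes openssl is on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash asks openssl for the certificate's subject hash and creates
// the <hash>.0 symlink in certsDir if it is not already present.
func linkCertByHash(pem, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already linked
	}
	return link, os.Symlink(pem, link)
}

func main() {
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}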
	I1216 10:32:09.003085  218168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 10:32:09.007224  218168 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 10:32:09.007289  218168 kubeadm.go:392] StartCluster: {Name:addons-020871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-020871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:32:09.007381  218168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 10:32:09.007434  218168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 10:32:09.046100  218168 cri.go:89] found id: ""
	I1216 10:32:09.046179  218168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 10:32:09.057442  218168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 10:32:09.066845  218168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 10:32:09.076105  218168 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 10:32:09.076128  218168 kubeadm.go:157] found existing configuration files:
	
	I1216 10:32:09.076175  218168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 10:32:09.084912  218168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 10:32:09.084982  218168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 10:32:09.094406  218168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 10:32:09.103008  218168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 10:32:09.103070  218168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 10:32:09.111943  218168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 10:32:09.120703  218168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 10:32:09.120787  218168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 10:32:09.129883  218168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 10:32:09.138823  218168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 10:32:09.138887  218168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
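The config check above treats any existing kubeconfig that does not reference https://control-plane.minikube.internal:8443 as stale and removes it before kubeadm init runs. A small sketch of that check; staleKubeconfigs is a hypothetical helper and only reports, it does not delete:

package main

import (
	"fmt"
	"os"
	"strings"
)

// staleKubeconfigs returns the paths that exist but do not mention the
// expected control-plane endpoint, mirroring the grep-then-remove loop above.
func staleKubeconfigs(endpoint string, paths []string) []string {
	var stale []string
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // missing file: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			stale = append(stale, p)
		}
	}
	return stale
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	fmt.Println(staleKubeconfigs("https://control-plane.minikube.internal:8443", files))
}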
	I1216 10:32:09.148252  218168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 10:32:09.197543  218168 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1216 10:32:09.197656  218168 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 10:32:09.302443  218168 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 10:32:09.302605  218168 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 10:32:09.302751  218168 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 10:32:09.310329  218168 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 10:32:09.443780  218168 out.go:235]   - Generating certificates and keys ...
	I1216 10:32:09.443932  218168 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 10:32:09.444028  218168 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 10:32:09.469694  218168 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 10:32:09.627018  218168 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 10:32:09.763666  218168 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 10:32:09.924584  218168 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 10:32:09.995439  218168 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 10:32:09.995632  218168 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-020871 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1216 10:32:10.377385  218168 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 10:32:10.377695  218168 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-020871 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1216 10:32:10.550043  218168 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 10:32:10.847457  218168 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 10:32:11.009055  218168 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 10:32:11.009235  218168 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 10:32:11.165166  218168 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 10:32:12.065836  218168 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 10:32:12.562745  218168 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 10:32:12.808599  218168 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 10:32:12.994204  218168 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 10:32:12.994920  218168 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 10:32:12.997524  218168 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 10:32:12.999294  218168 out.go:235]   - Booting up control plane ...
	I1216 10:32:12.999435  218168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 10:32:12.999557  218168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 10:32:12.999667  218168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 10:32:13.015097  218168 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 10:32:13.021271  218168 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 10:32:13.021361  218168 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 10:32:13.149175  218168 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 10:32:13.149314  218168 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 10:32:13.650446  218168 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.879834ms
	I1216 10:32:13.650558  218168 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 10:32:18.649823  218168 kubeadm.go:310] [api-check] The API server is healthy after 5.002069951s
	I1216 10:32:18.661734  218168 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 10:32:18.673165  218168 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 10:32:18.705864  218168 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 10:32:18.706066  218168 kubeadm.go:310] [mark-control-plane] Marking the node addons-020871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 10:32:18.723492  218168 kubeadm.go:310] [bootstrap-token] Using token: ziqwd1.bjhky6co4258z758
	I1216 10:32:18.724885  218168 out.go:235]   - Configuring RBAC rules ...
	I1216 10:32:18.725056  218168 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 10:32:18.733707  218168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 10:32:18.741920  218168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 10:32:18.746087  218168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 10:32:18.749427  218168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 10:32:18.754358  218168 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 10:32:19.057987  218168 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 10:32:19.486197  218168 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 10:32:20.057446  218168 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 10:32:20.058208  218168 kubeadm.go:310] 
	I1216 10:32:20.058288  218168 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 10:32:20.058330  218168 kubeadm.go:310] 
	I1216 10:32:20.058471  218168 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 10:32:20.058483  218168 kubeadm.go:310] 
	I1216 10:32:20.058521  218168 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 10:32:20.058605  218168 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 10:32:20.058703  218168 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 10:32:20.058725  218168 kubeadm.go:310] 
	I1216 10:32:20.058809  218168 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 10:32:20.058817  218168 kubeadm.go:310] 
	I1216 10:32:20.058895  218168 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 10:32:20.058909  218168 kubeadm.go:310] 
	I1216 10:32:20.058990  218168 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 10:32:20.059096  218168 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 10:32:20.059197  218168 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 10:32:20.059207  218168 kubeadm.go:310] 
	I1216 10:32:20.059325  218168 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 10:32:20.059436  218168 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 10:32:20.059445  218168 kubeadm.go:310] 
	I1216 10:32:20.059551  218168 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ziqwd1.bjhky6co4258z758 \
	I1216 10:32:20.059717  218168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:308d0b4d152056f6ea26bd81937e5552170019dfa040d2f99e94fa77bd33e210 \
	I1216 10:32:20.059758  218168 kubeadm.go:310] 	--control-plane 
	I1216 10:32:20.059769  218168 kubeadm.go:310] 
	I1216 10:32:20.059885  218168 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 10:32:20.059895  218168 kubeadm.go:310] 
	I1216 10:32:20.060134  218168 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ziqwd1.bjhky6co4258z758 \
	I1216 10:32:20.060270  218168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:308d0b4d152056f6ea26bd81937e5552170019dfa040d2f99e94fa77bd33e210 
	I1216 10:32:20.060912  218168 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 10:32:20.061053  218168 cni.go:84] Creating CNI manager for ""
	I1216 10:32:20.061069  218168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 10:32:20.063059  218168 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 10:32:20.064373  218168 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 10:32:20.074492  218168 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 10:32:20.094996  218168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 10:32:20.095101  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:20.095136  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-020871 minikube.k8s.io/updated_at=2024_12_16T10_32_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8 minikube.k8s.io/name=addons-020871 minikube.k8s.io/primary=true
	I1216 10:32:20.118932  218168 ops.go:34] apiserver oom_adj: -16
	I1216 10:32:20.245820  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:20.746190  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:21.246196  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:21.746565  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:22.246196  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:22.745963  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:23.246513  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:23.746809  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:24.245981  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:24.746333  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:24.880496  218168 kubeadm.go:1113] duration metric: took 4.785479938s to wait for elevateKubeSystemPrivileges
	I1216 10:32:24.880551  218168 kubeadm.go:394] duration metric: took 15.873268149s to StartCluster
	I1216 10:32:24.880578  218168 settings.go:142] acquiring lock: {Name:mk3e7a5ea729d05e36deedcd45fd7829ccafa72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:24.880735  218168 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 10:32:24.881342  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:24.881628  218168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 10:32:24.881639  218168 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 10:32:24.881731  218168 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 10:32:24.881870  218168 config.go:182] Loaded profile config "addons-020871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:32:24.881882  218168 addons.go:69] Setting yakd=true in profile "addons-020871"
	I1216 10:32:24.881894  218168 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-020871"
	I1216 10:32:24.881905  218168 addons.go:234] Setting addon yakd=true in "addons-020871"
	I1216 10:32:24.881993  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.881908  218168 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-020871"
	I1216 10:32:24.882105  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.881913  218168 addons.go:69] Setting storage-provisioner=true in profile "addons-020871"
	I1216 10:32:24.882176  218168 addons.go:234] Setting addon storage-provisioner=true in "addons-020871"
	I1216 10:32:24.882211  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.881919  218168 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-020871"
	I1216 10:32:24.882250  218168 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-020871"
	I1216 10:32:24.882294  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.881919  218168 addons.go:69] Setting registry=true in profile "addons-020871"
	I1216 10:32:24.882344  218168 addons.go:234] Setting addon registry=true in "addons-020871"
	I1216 10:32:24.882389  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.882506  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.882554  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.882564  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.882593  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.882644  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.882644  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.882670  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.882815  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.882862  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.882819  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.881927  218168 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-020871"
	I1216 10:32:24.881930  218168 addons.go:69] Setting volcano=true in profile "addons-020871"
	I1216 10:32:24.883224  218168 addons.go:234] Setting addon volcano=true in "addons-020871"
	I1216 10:32:24.883289  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.881932  218168 addons.go:69] Setting ingress=true in profile "addons-020871"
	I1216 10:32:24.883339  218168 addons.go:234] Setting addon ingress=true in "addons-020871"
	I1216 10:32:24.883376  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.883668  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.883697  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.883736  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.881935  218168 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-020871"
	I1216 10:32:24.883758  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.883778  218168 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-020871"
	I1216 10:32:24.883898  218168 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-020871"
	I1216 10:32:24.881938  218168 addons.go:69] Setting ingress-dns=true in profile "addons-020871"
	I1216 10:32:24.884002  218168 addons.go:234] Setting addon ingress-dns=true in "addons-020871"
	I1216 10:32:24.881876  218168 addons.go:69] Setting cloud-spanner=true in profile "addons-020871"
	I1216 10:32:24.881941  218168 addons.go:69] Setting volumesnapshots=true in profile "addons-020871"
	I1216 10:32:24.881941  218168 addons.go:69] Setting default-storageclass=true in profile "addons-020871"
	I1216 10:32:24.881933  218168 addons.go:69] Setting gcp-auth=true in profile "addons-020871"
	I1216 10:32:24.881944  218168 addons.go:69] Setting inspektor-gadget=true in profile "addons-020871"
	I1216 10:32:24.881910  218168 addons.go:69] Setting metrics-server=true in profile "addons-020871"
	I1216 10:32:24.884110  218168 addons.go:234] Setting addon metrics-server=true in "addons-020871"
	I1216 10:32:24.884127  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.884165  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.884177  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.884198  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.884391  218168 out.go:177] * Verifying Kubernetes components...
	I1216 10:32:24.884522  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.884554  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.884578  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.884620  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.884683  218168 mustload.go:65] Loading cluster: addons-020871
	I1216 10:32:24.884700  218168 addons.go:234] Setting addon volumesnapshots=true in "addons-020871"
	I1216 10:32:24.884724  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.884683  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.884786  218168 addons.go:234] Setting addon inspektor-gadget=true in "addons-020871"
	I1216 10:32:24.885010  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.885285  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.885308  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.885391  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.885413  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.884089  218168 addons.go:234] Setting addon cloud-spanner=true in "addons-020871"
	I1216 10:32:24.885493  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.884806  218168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-020871"
	I1216 10:32:24.890057  218168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:32:24.890436  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.890503  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.904708  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I1216 10:32:24.905224  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.905460  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39343
	I1216 10:32:24.905767  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.905783  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.905992  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.906194  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.906513  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.906532  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.906876  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.906916  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.908663  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46863
	I1216 10:32:24.908683  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.908900  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.917559  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.917628  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.918289  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.918332  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.918852  218168 config.go:182] Loaded profile config "addons-020871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:32:24.918996  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44869
	I1216 10:32:24.919099  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45437
	I1216 10:32:24.919164  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I1216 10:32:24.919226  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I1216 10:32:24.919224  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.919265  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.920460  218168 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-020871"
	I1216 10:32:24.920514  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.920886  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.920923  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.922235  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.922382  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.922454  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.922511  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.922566  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.922625  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1216 10:32:24.923385  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.923413  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.923505  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.923623  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.923641  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.923653  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.923663  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.923785  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.923790  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.923796  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.923802  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.923863  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.925013  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.925078  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.925098  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.925101  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.925081  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.925179  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.925546  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.925584  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.925609  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.925626  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.925830  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.925861  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.926354  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.926381  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.926532  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.944012  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34237
	I1216 10:32:24.944132  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I1216 10:32:24.944654  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.944657  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.945276  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.945301  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.945432  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.945454  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.945722  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.945888  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.946338  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.946395  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.946510  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.946538  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.955975  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I1216 10:32:24.956664  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.957346  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.957367  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.957792  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.957985  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.958206  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35703
	I1216 10:32:24.959254  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.960025  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.960045  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.960124  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I1216 10:32:24.960573  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
	I1216 10:32:24.960746  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.961148  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.961350  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.961363  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.961447  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.961521  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.961572  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.961888  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.962077  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.962110  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.962464  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.962502  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.962817  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:24.964905  218168 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 10:32:24.965924  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.966398  218168 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 10:32:24.966422  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 10:32:24.966448  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:24.966950  218168 addons.go:234] Setting addon default-storageclass=true in "addons-020871"
	I1216 10:32:24.966990  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.967380  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.967432  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.968757  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44815
	I1216 10:32:24.969371  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.969977  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.969995  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.970054  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.970658  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:24.970674  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.970679  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.970926  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:24.971275  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.971344  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.971660  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:24.971883  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:24.972034  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:24.972278  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.972303  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.972650  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.973169  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.973214  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.973868  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I1216 10:32:24.974379  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.974993  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.975015  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.975427  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.976028  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.976073  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.979001  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I1216 10:32:24.979616  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.980186  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.980217  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.980598  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.980771  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.982757  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:24.984458  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45287
	I1216 10:32:24.985070  218168 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 10:32:24.985166  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.985887  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.985909  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.986334  218168 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 10:32:24.986356  218168 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 10:32:24.986379  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:24.986595  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.986847  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.988329  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36515
	I1216 10:32:24.988745  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.989720  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:24.991211  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.991501  218168 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1216 10:32:24.991690  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:24.991719  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.992088  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:24.992284  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:24.992490  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:24.992682  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:24.992859  218168 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1216 10:32:24.992874  218168 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1216 10:32:24.992904  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:24.993540  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I1216 10:32:24.993982  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.994651  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.994670  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.995285  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.995680  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.997515  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:24.997763  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.998252  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:24.998282  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.998421  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:24.998592  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:24.998750  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:24.998878  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:24.999341  218168 out.go:177]   - Using image docker.io/busybox:stable
	I1216 10:32:25.000152  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43547
	I1216 10:32:25.000644  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.001349  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.001375  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.001852  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.001883  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.002248  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.002408  218168 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 10:32:25.002502  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.002721  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.003882  218168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 10:32:25.003904  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 10:32:25.003928  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.004211  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38353
	I1216 10:32:25.004820  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.005578  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.005592  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.005660  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I1216 10:32:25.006154  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.006279  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.006538  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.006822  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.006841  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.007270  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.007492  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.007549  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.007562  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.007715  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.007862  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.008054  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.008223  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.008404  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.009675  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.010214  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:25.010299  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:25.010691  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:25.011155  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:25.011198  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:25.011417  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I1216 10:32:25.011777  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 10:32:25.011919  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.012709  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.012729  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.012995  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 10:32:25.013008  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.013016  218168 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 10:32:25.013036  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.013182  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.013531  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.014896  218168 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1216 10:32:25.015700  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.016185  218168 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 10:32:25.016203  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 10:32:25.016226  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.016529  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.016555  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.016573  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.016614  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.017020  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.017285  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.017460  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.017963  218168 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1216 10:32:25.018090  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I1216 10:32:25.019056  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.019282  218168 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 10:32:25.019301  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1216 10:32:25.019323  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.019612  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.019843  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.019861  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.019993  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.020021  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.020122  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.020380  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.020553  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.020743  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.021533  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.021993  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.022855  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.023313  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.023332  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.023406  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.023591  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.023754  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.023917  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.025399  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.027354  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 10:32:25.028674  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 10:32:25.030046  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 10:32:25.031487  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 10:32:25.034553  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41299
	I1216 10:32:25.034816  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 10:32:25.035328  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.036074  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.036104  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.036642  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.036847  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.037849  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 10:32:25.039032  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.039266  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:25.039284  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:25.039519  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:25.039545  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:25.039552  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:25.039560  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:25.039566  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:25.040568  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46119
	I1216 10:32:25.041817  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:25.041863  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:25.041883  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	W1216 10:32:25.041998  218168 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 10:32:25.042012  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.042022  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I1216 10:32:25.042406  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 10:32:25.043038  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I1216 10:32:25.043177  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I1216 10:32:25.043180  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.043263  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.043309  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.043726  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.044056  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.044074  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.045129  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I1216 10:32:25.045143  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.045369  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.045591  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.045801  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.046030  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.046112  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.046127  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.046235  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.046391  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.046412  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.046646  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.046749  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 10:32:25.046800  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.046851  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.048262  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 10:32:25.048282  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 10:32:25.048315  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.048406  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.048441  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.048521  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.048560  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.048575  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.048921  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.049054  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I1216 10:32:25.049132  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.050090  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.050364  218168 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1216 10:32:25.050793  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.050814  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.051205  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.051293  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.051823  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.051917  218168 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1216 10:32:25.051870  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:25.052720  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:25.053012  218168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 10:32:25.053019  218168 out.go:177]   - Using image docker.io/registry:2.8.3
	I1216 10:32:25.053536  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.054129  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.054151  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.054365  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I1216 10:32:25.054414  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.054366  218168 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1216 10:32:25.054562  218168 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 10:32:25.054775  218168 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 10:32:25.054798  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.054665  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.054928  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.055492  218168 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 10:32:25.055508  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 10:32:25.055524  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.056043  218168 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1216 10:32:25.056058  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 10:32:25.056074  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.057159  218168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 10:32:25.057175  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 10:32:25.057194  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.057282  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.060636  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.063911  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.063927  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.063947  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.063955  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.063979  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.063997  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.064001  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.064395  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.064678  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.065411  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.065437  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.065480  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.065792  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.065824  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.065845  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.065864  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.065881  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.065895  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.066080  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.066158  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.066252  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.066309  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.066576  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.066846  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.067532  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.067552  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.068068  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.068343  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.069188  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.071332  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.071541  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.071696  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.072607  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.074379  218168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1216 10:32:25.075670  218168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:32:25.077014  218168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:32:25.078469  218168 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 10:32:25.078528  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 10:32:25.078559  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.081670  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I1216 10:32:25.082375  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.082969  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.083010  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.083214  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.083397  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.083506  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.083614  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.083688  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.084493  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.084554  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.084918  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.085161  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.087113  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.087405  218168 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 10:32:25.087425  218168 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 10:32:25.087445  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.090135  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.090493  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.090516  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.090653  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.090828  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.090991  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.091136  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.333021  218168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 10:32:25.333194  218168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 10:32:25.467825  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 10:32:25.474263  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 10:32:25.497905  218168 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 10:32:25.497933  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1216 10:32:25.500771  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 10:32:25.513874  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 10:32:25.531484  218168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 10:32:25.531525  218168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 10:32:25.543702  218168 node_ready.go:35] waiting up to 6m0s for node "addons-020871" to be "Ready" ...
	I1216 10:32:25.547117  218168 node_ready.go:49] node "addons-020871" has status "Ready":"True"
	I1216 10:32:25.547158  218168 node_ready.go:38] duration metric: took 3.394207ms for node "addons-020871" to be "Ready" ...
	I1216 10:32:25.547173  218168 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 10:32:25.553613  218168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 10:32:25.553647  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 10:32:25.556130  218168 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n8thf" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:25.587567  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 10:32:25.616258  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 10:32:25.616684  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 10:32:25.618001  218168 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 10:32:25.618030  218168 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 10:32:25.619368  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 10:32:25.648280  218168 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 10:32:25.648331  218168 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 10:32:25.648578  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 10:32:25.648611  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 10:32:25.691117  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 10:32:25.755514  218168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 10:32:25.755551  218168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 10:32:25.759590  218168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 10:32:25.759626  218168 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 10:32:25.795202  218168 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 10:32:25.795235  218168 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 10:32:25.890128  218168 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 10:32:25.890154  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 10:32:25.904286  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 10:32:25.904319  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 10:32:25.951937  218168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 10:32:25.951971  218168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 10:32:25.958349  218168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 10:32:25.958388  218168 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 10:32:26.012457  218168 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 10:32:26.012516  218168 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 10:32:26.130613  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 10:32:26.157201  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 10:32:26.157232  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 10:32:26.225869  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 10:32:26.231315  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 10:32:26.231353  218168 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 10:32:26.253545  218168 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 10:32:26.253580  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 10:32:26.371236  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 10:32:26.371282  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 10:32:26.413295  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 10:32:26.524828  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 10:32:26.524868  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 10:32:26.604273  218168 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:32:26.604316  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 10:32:26.949259  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 10:32:26.949286  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 10:32:26.968163  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:32:27.200290  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 10:32:27.200333  218168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 10:32:27.459041  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 10:32:27.459065  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 10:32:27.562683  218168 pod_ready.go:103] pod "coredns-7c65d6cfc9-n8thf" in "kube-system" namespace has status "Ready":"False"
	I1216 10:32:27.696690  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 10:32:27.696723  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 10:32:27.886888  218168 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.553643166s)
	I1216 10:32:27.886942  218168 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1216 10:32:27.886945  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.419077214s)
	I1216 10:32:27.887004  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:27.887024  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:27.887457  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:27.887457  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:27.887491  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:27.887504  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:27.887511  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:27.887801  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:27.887820  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:28.152480  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 10:32:28.152515  218168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 10:32:28.409428  218168 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-020871" context rescaled to 1 replicas
	I1216 10:32:28.411554  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 10:32:29.720714  218168 pod_ready.go:103] pod "coredns-7c65d6cfc9-n8thf" in "kube-system" namespace has status "Ready":"False"
	I1216 10:32:30.135292  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.634471782s)
	I1216 10:32:30.135317  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.621405398s)
	I1216 10:32:30.135362  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.135368  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.135376  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.135381  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.135391  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.547788151s)
	I1216 10:32:30.135301  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.660990053s)
	I1216 10:32:30.135434  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.135446  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.135460  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.135477  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.135849  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.135855  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.135888  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.135899  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.135907  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.135911  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.135915  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.135891  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.135993  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136004  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136012  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.136011  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136020  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.136021  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136031  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.136039  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.136130  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136140  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136149  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.136157  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.136236  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.136265  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136272  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136351  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136361  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136418  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.136454  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136463  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136915  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.136996  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.137031  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.256774  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.256807  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.257285  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.257310  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.257314  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:31.565891  218168 pod_ready.go:93] pod "coredns-7c65d6cfc9-n8thf" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:31.565918  218168 pod_ready.go:82] duration metric: took 6.009748869s for pod "coredns-7c65d6cfc9-n8thf" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:31.565929  218168 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqhl8" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:32.082725  218168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 10:32:32.082766  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:32.086188  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:32.086704  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:32.086737  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:32.086918  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:32.087217  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:32.087406  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:32.087616  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:32.645727  218168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 10:32:32.760429  218168 addons.go:234] Setting addon gcp-auth=true in "addons-020871"
	I1216 10:32:32.760500  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:32.760823  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:32.760869  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:32.777269  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32861
	I1216 10:32:32.777914  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:32.778464  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:32.778486  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:32.778910  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:32.779594  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:32.779628  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:32.796125  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I1216 10:32:32.796728  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:32.797279  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:32.797310  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:32.797691  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:32.797943  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:32.799661  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:32.799916  218168 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 10:32:32.799945  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:32.802800  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:32.803183  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:32.803218  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:32.803402  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:32.803616  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:32.803784  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:32.803968  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:33.521626  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.905317422s)
	I1216 10:32:33.521691  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.521689  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.904974983s)
	I1216 10:32:33.521705  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.521733  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.521752  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.521729  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.902328261s)
	I1216 10:32:33.521822  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.830679561s)
	I1216 10:32:33.521833  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.521843  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.521854  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.521864  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.521956  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.391289284s)
	I1216 10:32:33.521994  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.522012  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.522139  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.296231219s)
	I1216 10:32:33.522161  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.522170  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.522213  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.108875995s)
	I1216 10:32:33.522241  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.522257  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.522704  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.522710  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.554119846s)
	I1216 10:32:33.522744  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.522751  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.522758  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.522765  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	W1216 10:32:33.522835  218168 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 10:32:33.522880  218168 retry.go:31] will retry after 224.70682ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 10:32:33.522927  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.522946  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.522976  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.522983  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.522991  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.522999  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.523125  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.523172  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.523196  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.523254  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.523532  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.523548  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.523555  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.523561  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.523198  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.523229  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.523218  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.523930  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.523941  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.523948  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.523227  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.523178  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.523988  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.523998  218168 addons.go:475] Verifying addon metrics-server=true in "addons-020871"
	I1216 10:32:33.523613  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.523656  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.523678  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.524365  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.523698  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.524397  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.524376  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.524441  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.526476  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.526481  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.526497  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.526507  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.526516  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.526522  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.526537  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.526543  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.526552  218168 addons.go:475] Verifying addon ingress=true in "addons-020871"
	I1216 10:32:33.526817  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.526829  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.526837  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.526859  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.526866  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.526945  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.527260  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.527298  218168 addons.go:475] Verifying addon registry=true in "addons-020871"
	I1216 10:32:33.526950  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.527378  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.526968  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.526990  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.528627  218168 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-020871 service yakd-dashboard -n yakd-dashboard
	
	I1216 10:32:33.528650  218168 out.go:177] * Verifying ingress addon...
	I1216 10:32:33.528631  218168 out.go:177] * Verifying registry addon...
	I1216 10:32:33.530727  218168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 10:32:33.530727  218168 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 10:32:33.544670  218168 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 10:32:33.544691  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:33.544743  218168 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 10:32:33.544763  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:33.585912  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.585935  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.586180  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.586233  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.586244  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.592509  218168 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqhl8" in "kube-system" namespace has status "Ready":"False"
	I1216 10:32:33.748611  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:32:34.037801  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:34.038799  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:34.558434  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:34.559036  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:34.877474  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.465862141s)
	I1216 10:32:34.877548  218168 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.077591739s)
	I1216 10:32:34.877577  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:34.877597  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:34.877939  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:34.877962  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:34.877971  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:34.877982  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:34.877991  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:34.878222  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:34.878281  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:34.878242  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:34.878297  218168 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-020871"
	I1216 10:32:34.879402  218168 out.go:177] * Verifying csi-hostpath-driver addon...
	I1216 10:32:34.879416  218168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:32:34.881327  218168 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 10:32:34.882095  218168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 10:32:34.882496  218168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 10:32:34.882517  218168 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 10:32:34.924904  218168 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 10:32:34.924943  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:34.931853  218168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 10:32:34.931886  218168 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 10:32:34.988474  218168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 10:32:34.988510  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 10:32:35.030051  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 10:32:35.035470  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:35.035671  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:35.387382  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:35.536061  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:35.536070  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:35.626606  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.877932859s)
	I1216 10:32:35.626675  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:35.626689  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:35.627042  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:35.627067  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:35.627079  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:35.627089  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:35.627088  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:35.627335  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:35.627357  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:35.887855  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:36.036141  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:36.036557  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:36.072508  218168 pod_ready.go:98] pod "coredns-7c65d6cfc9-nqhl8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:35 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.206 HostIPs:[{IP:192.168.39.206}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-12-16 10:32:24 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-16 10:32:29 +0000 UTC,FinishedAt:2024-12-16 10:32:34 +0000 UTC,ContainerID:cri-o://31fc3084d77e610eafcdd7b9e9bf9c2bf827ab3aa3bbd54deb348b94d6edc242,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://31fc3084d77e610eafcdd7b9e9bf9c2bf827ab3aa3bbd54deb348b94d6edc242 Started:0xc0029236d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0028d7b50} {Name:kube-api-access-898fn MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0028d7b60}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1216 10:32:36.072540  218168 pod_ready.go:82] duration metric: took 4.506604443s for pod "coredns-7c65d6cfc9-nqhl8" in "kube-system" namespace to be "Ready" ...
	E1216 10:32:36.072550  218168 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-nqhl8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:35 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.206 HostIPs:[{IP:192.168.39.206}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-12-16 10:32:24 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-16 10:32:29 +0000 UTC,FinishedAt:2024-12-16 10:32:34 +0000 UTC,ContainerID:cri-o://31fc3084d77e610eafcdd7b9e9bf9c2bf827ab3aa3bbd54deb348b94d6edc242,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://31fc3084d77e610eafcdd7b9e9bf9c2bf827ab3aa3bbd54deb348b94d6edc242 Started:0xc0029236d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0028d7b50} {Name:kube-api-access-898fn MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0028d7b60}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1216 10:32:36.072569  218168 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.076670  218168 pod_ready.go:93] pod "etcd-addons-020871" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:36.076694  218168 pod_ready.go:82] duration metric: took 4.116304ms for pod "etcd-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.076706  218168 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.080478  218168 pod_ready.go:93] pod "kube-apiserver-addons-020871" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:36.080498  218168 pod_ready.go:82] duration metric: took 3.785307ms for pod "kube-apiserver-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.080506  218168 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.084591  218168 pod_ready.go:93] pod "kube-controller-manager-addons-020871" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:36.084615  218168 pod_ready.go:82] duration metric: took 4.103725ms for pod "kube-controller-manager-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.084624  218168 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n22fm" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.090762  218168 pod_ready.go:93] pod "kube-proxy-n22fm" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:36.090784  218168 pod_ready.go:82] duration metric: took 6.154015ms for pod "kube-proxy-n22fm" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.090793  218168 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.318568  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.288468806s)
	I1216 10:32:36.318636  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:36.318647  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:36.318940  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:36.318963  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:36.318974  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:36.318988  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:36.319003  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:36.319202  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:36.319233  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:36.321941  218168 addons.go:475] Verifying addon gcp-auth=true in "addons-020871"
	I1216 10:32:36.323473  218168 out.go:177] * Verifying gcp-auth addon...
	I1216 10:32:36.325937  218168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 10:32:36.340142  218168 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 10:32:36.340168  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:36.398866  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:36.471315  218168 pod_ready.go:93] pod "kube-scheduler-addons-020871" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:36.471359  218168 pod_ready.go:82] duration metric: took 380.549574ms for pod "kube-scheduler-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.471371  218168 pod_ready.go:39] duration metric: took 10.924178884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 10:32:36.471392  218168 api_server.go:52] waiting for apiserver process to appear ...
	I1216 10:32:36.471475  218168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 10:32:36.518035  218168 api_server.go:72] duration metric: took 11.636330261s to wait for apiserver process to appear ...
	I1216 10:32:36.518070  218168 api_server.go:88] waiting for apiserver healthz status ...
	I1216 10:32:36.518091  218168 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1216 10:32:36.523687  218168 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1216 10:32:36.524703  218168 api_server.go:141] control plane version: v1.31.2
	I1216 10:32:36.524727  218168 api_server.go:131] duration metric: took 6.651151ms to wait for apiserver health ...
	I1216 10:32:36.524735  218168 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 10:32:36.546772  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:36.547018  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:36.675911  218168 system_pods.go:59] 18 kube-system pods found
	I1216 10:32:36.675948  218168 system_pods.go:61] "amd-gpu-device-plugin-5mpr5" [af2c6f6b-1b17-4d42-8958-26458e2900e4] Running
	I1216 10:32:36.675953  218168 system_pods.go:61] "coredns-7c65d6cfc9-n8thf" [dc914613-3264-4abb-8a01-5194512e0048] Running
	I1216 10:32:36.675961  218168 system_pods.go:61] "csi-hostpath-attacher-0" [ade3b0d6-039c-4252-be2d-5f4ce1376484] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 10:32:36.675967  218168 system_pods.go:61] "csi-hostpath-resizer-0" [3d3fd497-6347-4f6b-8e24-bda139222416] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 10:32:36.675976  218168 system_pods.go:61] "csi-hostpathplugin-6mvv7" [562bd994-9c64-4aaf-9ebf-9e8a574500d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 10:32:36.675980  218168 system_pods.go:61] "etcd-addons-020871" [9c41352e-5426-4d0a-beec-337bdcd099e7] Running
	I1216 10:32:36.675984  218168 system_pods.go:61] "kube-apiserver-addons-020871" [c040845d-a416-4e47-8810-49f88b739d44] Running
	I1216 10:32:36.675989  218168 system_pods.go:61] "kube-controller-manager-addons-020871" [f1e17c78-352b-498b-b5a7-e74421fb61c8] Running
	I1216 10:32:36.675994  218168 system_pods.go:61] "kube-ingress-dns-minikube" [a4893ee0-36ca-4cb9-a751-c2ffdd5daf75] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 10:32:36.675998  218168 system_pods.go:61] "kube-proxy-n22fm" [963da550-43ba-48fd-8b0e-76fc08650c48] Running
	I1216 10:32:36.676003  218168 system_pods.go:61] "kube-scheduler-addons-020871" [49126848-7cf8-4f5a-acde-98a68986ee26] Running
	I1216 10:32:36.676007  218168 system_pods.go:61] "metrics-server-84c5f94fbc-lk9mr" [fe81a6d6-63fe-417e-9b7d-9047da33acbf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 10:32:36.676014  218168 system_pods.go:61] "nvidia-device-plugin-daemonset-z7nb7" [5897e921-e086-496a-8865-2c37fd8ea3bd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 10:32:36.676019  218168 system_pods.go:61] "registry-5cc95cd69-r6zm6" [80b40373-c14b-4d26-ba1f-d0eab35d8a56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 10:32:36.676024  218168 system_pods.go:61] "registry-proxy-qk5tx" [302e1efd-762f-487a-96d5-b24b982f648f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 10:32:36.676029  218168 system_pods.go:61] "snapshot-controller-56fcc65765-7cfdv" [9413102a-ad1a-4ef5-b4fa-ab7380a28148] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 10:32:36.676036  218168 system_pods.go:61] "snapshot-controller-56fcc65765-gbbxw" [080024e6-a3d1-4134-87ff-75521de39601] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 10:32:36.676039  218168 system_pods.go:61] "storage-provisioner" [f10c5c32-c4e9-4810-91a1-603c2cff9bde] Running
	I1216 10:32:36.676047  218168 system_pods.go:74] duration metric: took 151.305953ms to wait for pod list to return data ...
	I1216 10:32:36.676057  218168 default_sa.go:34] waiting for default service account to be created ...
	I1216 10:32:36.830216  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:36.870936  218168 default_sa.go:45] found service account: "default"
	I1216 10:32:36.870964  218168 default_sa.go:55] duration metric: took 194.901031ms for default service account to be created ...
	I1216 10:32:36.870974  218168 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 10:32:36.932075  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:37.037042  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:37.037287  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:37.080533  218168 system_pods.go:86] 18 kube-system pods found
	I1216 10:32:37.080568  218168 system_pods.go:89] "amd-gpu-device-plugin-5mpr5" [af2c6f6b-1b17-4d42-8958-26458e2900e4] Running
	I1216 10:32:37.080577  218168 system_pods.go:89] "coredns-7c65d6cfc9-n8thf" [dc914613-3264-4abb-8a01-5194512e0048] Running
	I1216 10:32:37.080587  218168 system_pods.go:89] "csi-hostpath-attacher-0" [ade3b0d6-039c-4252-be2d-5f4ce1376484] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 10:32:37.080597  218168 system_pods.go:89] "csi-hostpath-resizer-0" [3d3fd497-6347-4f6b-8e24-bda139222416] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 10:32:37.080607  218168 system_pods.go:89] "csi-hostpathplugin-6mvv7" [562bd994-9c64-4aaf-9ebf-9e8a574500d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 10:32:37.080615  218168 system_pods.go:89] "etcd-addons-020871" [9c41352e-5426-4d0a-beec-337bdcd099e7] Running
	I1216 10:32:37.080622  218168 system_pods.go:89] "kube-apiserver-addons-020871" [c040845d-a416-4e47-8810-49f88b739d44] Running
	I1216 10:32:37.080633  218168 system_pods.go:89] "kube-controller-manager-addons-020871" [f1e17c78-352b-498b-b5a7-e74421fb61c8] Running
	I1216 10:32:37.080645  218168 system_pods.go:89] "kube-ingress-dns-minikube" [a4893ee0-36ca-4cb9-a751-c2ffdd5daf75] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 10:32:37.080654  218168 system_pods.go:89] "kube-proxy-n22fm" [963da550-43ba-48fd-8b0e-76fc08650c48] Running
	I1216 10:32:37.080661  218168 system_pods.go:89] "kube-scheduler-addons-020871" [49126848-7cf8-4f5a-acde-98a68986ee26] Running
	I1216 10:32:37.080673  218168 system_pods.go:89] "metrics-server-84c5f94fbc-lk9mr" [fe81a6d6-63fe-417e-9b7d-9047da33acbf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 10:32:37.080688  218168 system_pods.go:89] "nvidia-device-plugin-daemonset-z7nb7" [5897e921-e086-496a-8865-2c37fd8ea3bd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 10:32:37.080701  218168 system_pods.go:89] "registry-5cc95cd69-r6zm6" [80b40373-c14b-4d26-ba1f-d0eab35d8a56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 10:32:37.080713  218168 system_pods.go:89] "registry-proxy-qk5tx" [302e1efd-762f-487a-96d5-b24b982f648f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 10:32:37.080725  218168 system_pods.go:89] "snapshot-controller-56fcc65765-7cfdv" [9413102a-ad1a-4ef5-b4fa-ab7380a28148] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 10:32:37.080739  218168 system_pods.go:89] "snapshot-controller-56fcc65765-gbbxw" [080024e6-a3d1-4134-87ff-75521de39601] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 10:32:37.080749  218168 system_pods.go:89] "storage-provisioner" [f10c5c32-c4e9-4810-91a1-603c2cff9bde] Running
	I1216 10:32:37.080762  218168 system_pods.go:126] duration metric: took 209.781016ms to wait for k8s-apps to be running ...
	I1216 10:32:37.080775  218168 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 10:32:37.080836  218168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:32:37.099748  218168 system_svc.go:56] duration metric: took 18.965052ms WaitForService to wait for kubelet
	I1216 10:32:37.099785  218168 kubeadm.go:582] duration metric: took 12.218112212s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 10:32:37.099814  218168 node_conditions.go:102] verifying NodePressure condition ...
	I1216 10:32:37.273323  218168 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 10:32:37.273362  218168 node_conditions.go:123] node cpu capacity is 2
	I1216 10:32:37.273375  218168 node_conditions.go:105] duration metric: took 173.556511ms to run NodePressure ...
	I1216 10:32:37.273389  218168 start.go:241] waiting for startup goroutines ...
	I1216 10:32:37.330378  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:37.386789  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:37.534266  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:37.535524  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:37.829546  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:37.887194  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:38.036328  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:38.036740  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:38.329894  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:38.386582  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:38.536275  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:38.536752  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:38.830684  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:38.886779  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:39.035649  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:39.035794  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:39.329892  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:39.387017  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:39.534681  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:39.535523  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:40.113833  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:40.114657  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:40.114966  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:40.115296  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:40.330007  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:40.387607  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:40.535487  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:40.536259  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:40.829711  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:40.886686  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:41.036590  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:41.037063  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:41.330399  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:41.387679  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:41.536148  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:41.536202  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:41.829082  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:41.887031  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:42.035983  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:42.036029  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:42.332985  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:42.387755  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:42.537492  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:42.537705  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:42.830367  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:42.888620  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:43.036529  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:43.037917  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:43.329508  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:43.386548  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:43.535520  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:43.536127  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:43.830321  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:43.887616  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:44.036117  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:44.036319  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:44.329983  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:44.387852  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:44.536377  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:44.537145  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:44.829975  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:44.886919  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:45.036071  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:45.036441  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:45.330270  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:45.388217  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:45.535309  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:45.537293  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:45.829984  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:45.887804  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:46.036525  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:46.036994  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:46.329944  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:46.387474  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:46.535579  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:46.536143  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:46.829848  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:46.886419  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:47.035816  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:47.036626  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:47.329234  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:47.388097  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:47.535576  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:47.535825  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:47.900283  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:47.901206  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:48.034582  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:48.035090  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:48.329770  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:48.387163  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:48.535885  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:48.536380  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:48.829718  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:48.886283  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:49.037734  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:49.037910  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:49.330307  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:49.387924  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:49.536068  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:49.536078  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:49.830135  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:49.887186  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:50.034824  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:50.035346  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:50.616533  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:50.616682  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:50.616920  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:50.617102  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:50.829362  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:50.886855  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:51.034747  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:51.035840  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:51.330178  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:51.387056  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:51.535755  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:51.536900  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:51.830558  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:51.887087  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:52.036069  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:52.036292  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:52.598060  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:52.598089  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:52.598221  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:52.598719  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:52.830252  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:52.887635  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:53.035614  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:53.035818  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:53.330109  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:53.387643  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:53.536111  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:53.536391  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:53.832841  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:53.888169  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:54.035815  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:54.036365  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:54.331019  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:54.387651  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:54.536126  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:54.536191  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:54.830887  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:54.931692  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:55.035649  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:55.036125  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:55.330061  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:55.387329  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:55.535711  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:55.537046  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:55.829600  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:55.887797  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:56.035931  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:56.035948  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:56.330806  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:56.387500  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:56.539252  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:56.539323  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:56.830707  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:56.887242  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:57.035700  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:57.037236  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:57.331439  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:57.388564  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:57.535445  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:57.535761  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:57.829929  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:57.887486  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:58.035292  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:58.035884  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:58.331158  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:58.387314  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:58.535869  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:58.536285  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:58.830842  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:58.887743  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:59.036481  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:59.036644  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:59.329562  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:59.386442  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:59.536407  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:59.536790  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:59.830243  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:59.887809  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:00.035409  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:00.035652  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:00.330860  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:00.386865  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:00.535901  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:00.536290  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:00.830375  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:00.888033  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:01.035365  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:01.035582  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:01.329686  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:01.387135  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:01.536102  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:01.536815  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:01.829419  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:01.887746  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:02.036538  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:02.036766  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:02.329216  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:02.387399  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:02.535638  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:02.536559  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:02.829869  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:02.887587  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:03.035840  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:03.035944  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:03.330930  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:03.387120  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:03.535775  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:03.536160  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:03.830271  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:03.887726  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:04.035750  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:04.036552  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:04.330417  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:04.387978  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:04.536532  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:04.536646  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:04.834685  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:04.937614  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:05.035208  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:05.037210  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:05.330457  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:05.387967  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:05.536286  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:05.536433  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:05.829436  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:05.889379  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:06.036549  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:06.037124  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:06.331154  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:06.387565  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:06.535793  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:06.536120  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:06.830203  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:06.887031  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:07.036105  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:07.036323  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:07.330634  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:07.386810  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:07.536880  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:07.537016  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:07.829321  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:07.887527  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:08.034753  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:08.034908  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:08.330824  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:08.387880  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:08.534695  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:08.536114  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:08.830305  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:08.887577  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:09.035231  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:09.036647  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:09.329783  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:09.386853  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:09.534987  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:09.535185  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:09.830687  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:09.887460  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:10.036233  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:10.036294  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:10.329677  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:10.386384  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:10.535996  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:10.536287  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:10.830314  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:10.887743  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:11.036128  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:11.037326  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:11.329916  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:11.386927  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:11.535143  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:11.535401  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:11.830347  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:11.887864  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:12.035403  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:12.035655  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:12.329956  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:12.388783  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:12.535385  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:12.537070  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:12.830538  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:12.886721  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:13.035267  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:13.036020  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:13.329937  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:13.387652  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:13.535861  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:13.537845  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:13.831262  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:13.932883  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:14.036036  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:14.036151  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:14.330819  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:14.386770  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:14.534840  218168 kapi.go:107] duration metric: took 41.004107361s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 10:33:14.535009  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:14.830003  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:14.887869  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:15.036440  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:15.330598  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:15.386491  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:15.535959  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:15.829380  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:15.887931  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:16.036099  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:16.330536  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:16.386436  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:16.535919  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:16.830551  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:16.887325  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:17.035754  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:17.330479  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:17.386718  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:17.535020  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:17.830882  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:17.887941  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:18.035369  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:18.329957  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:18.387072  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:18.535680  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:18.830376  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:18.887954  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:19.035739  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:19.330017  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:19.387797  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:19.537518  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:19.829417  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:19.887297  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:20.035209  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:20.329853  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:20.387064  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:20.537404  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:20.830193  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:20.887331  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:21.035848  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:21.329292  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:21.386952  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:21.535188  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:21.830346  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:21.887306  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:22.035931  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:22.347897  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:22.402692  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:22.536648  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:23.066701  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:23.068565  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:23.069560  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:23.342543  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:23.445525  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:23.544494  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:23.829417  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:23.888911  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:24.036682  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:24.338526  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:24.436478  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:24.536371  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:24.830941  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:24.889040  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:25.046831  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:25.339416  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:25.388019  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:25.535557  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:25.829703  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:25.887879  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:26.038458  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:26.330328  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:26.390241  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:26.537659  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:26.830630  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:26.886266  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:27.036020  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:27.331559  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:27.386596  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:27.535713  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:27.830347  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:27.887126  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:28.035583  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:28.330890  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:28.386503  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:28.537823  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:28.830322  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:28.887734  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:29.461220  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:29.462333  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:29.465523  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:29.556294  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:29.830115  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:29.887594  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:30.035551  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:30.332343  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:30.387077  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:30.540328  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:30.829813  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:30.887370  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:31.039561  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:31.331982  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:31.389554  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:31.536446  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:31.829591  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:31.887320  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:32.035813  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:32.329906  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:32.387404  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:32.535456  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:32.831016  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:32.888230  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:33.034860  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:33.330404  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:33.386830  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:33.541133  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:33.830270  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:33.888542  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:34.036805  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:34.329696  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:34.386572  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:34.536117  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:34.830238  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:34.886905  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:35.035310  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:35.330330  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:35.388368  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:35.538994  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:35.829870  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:35.887462  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:36.035194  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:36.330070  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:36.400227  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:36.535785  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:36.830446  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:36.886451  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:37.040934  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:37.330140  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:37.387314  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:37.535631  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:37.830075  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:37.932312  218168 kapi.go:107] duration metric: took 1m3.050214546s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 10:33:38.036092  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:38.329868  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:38.536465  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:38.830454  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:39.035496  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:39.330159  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:39.536766  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:39.830697  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:40.037326  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:40.332299  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:40.535191  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:40.830799  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:41.102276  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:41.329281  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:41.535324  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:41.830067  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:42.035160  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:42.330128  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:42.535174  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:42.831898  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:43.250624  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:43.330276  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:43.535618  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:43.830087  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:44.036308  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:44.338625  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:44.537149  218168 kapi.go:107] duration metric: took 1m11.006418061s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 10:33:44.831214  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:45.330333  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:45.830019  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:46.329410  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:46.831549  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:47.335062  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:47.831204  218168 kapi.go:107] duration metric: took 1m11.505269435s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 10:33:47.834114  218168 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-020871 cluster.
	I1216 10:33:47.835432  218168 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 10:33:47.836733  218168 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 10:33:47.837932  218168 out.go:177] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, metrics-server, ingress-dns, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1216 10:33:47.839248  218168 addons.go:510] duration metric: took 1m22.957522658s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner nvidia-device-plugin cloud-spanner storage-provisioner-rancher metrics-server ingress-dns inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1216 10:33:47.839287  218168 start.go:246] waiting for cluster config update ...
	I1216 10:33:47.839312  218168 start.go:255] writing updated cluster config ...
	I1216 10:33:47.839625  218168 ssh_runner.go:195] Run: rm -f paused
	I1216 10:33:47.891556  218168 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1216 10:33:47.893368  218168 out.go:177] * Done! kubectl is now configured to use "addons-020871" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.064488667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58f60e39-ad46-43a7-b495-d6747d596036 name=/runtime.v1.RuntimeService/Version
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.065399346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b98a900-8fef-4cd4-89b6-df23eea7ac29 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.066530497Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345420066508329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b98a900-8fef-4cd4-89b6-df23eea7ac29 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.067071357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0756dc4e-13fa-4241-be5f-43216a1635e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.067131856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0756dc4e-13fa-4241-be5f-43216a1635e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.067444332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e81550007ffca59cb90023d3a841cff89d7da6ce38969537258c163a8f348f7,PodSandboxId:be62c474518c58e88907df6aa4a41b08f52bcae3de431c69898ba171788dc081,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734345282010385018,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a92e7de-8018-453e-9698-b51f8a038f3a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bb8b76bd3dade77077c6898235f291741d4426a332d2824fe0e0b886242920,PodSandboxId:ea109ac67ead1b8aad4c0bf488f0b24bc55e4b272274734831daf7b4d284df94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734345231080299961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8dbdf275-6cac-47d9-a5f1-d03fff3bb404,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033922d5d5846d638260076a231caf4f11bd7818bf32fc7fc375b3a200d43b5,PodSandboxId:c6dfaf7164af5cc219fc50274bc28271ae862a1921dd11288bd3d9eeb1c9fefc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734345224071986071,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-dt2gx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8399a8f-fd6c-4fa1-9548-127b14d5e96a,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:944b06a2fc9b508300ed1fef506307c0d55c4c1259ee839fd8e06486311fe6a8,PodSandboxId:de0a74c84fe1f027f7487fd71f955f6bed17b1ba204660552cf035986594f974,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1734345205332677207,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-plw8v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a47a5b8a-cdc8-4830-b779-8add294d86d0,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1835ee4863054cc3ab3b4707317361993256acf7f93ac1be721d2e18f1755bb9,PodSandboxId:da59fd91a0119a9d97fd62e4bae0218db8dfb521d0c76ee64506e00c81eaf7ce,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734345205030705687,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xzbkq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f4d0e9f7-2949-4ec5-9a16-4e08788e1612,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7cf5993f5adc58e7cb62f9fad07598cc030b3817a95bb2022871b613088447,PodSandboxId:8d42d70ae34d0c58a47a91721f96b845ed1d83eb39005e1c4cf52c3ea544d286,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734345184790464453,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-lk9mr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe81a6d6-63fe-417e-9b7d-9047da33acbf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3dac3cede3d30ebd87f3711e892cfbc049865beb359cc4b0ebef83a352a3db,PodSandboxId:a513e6d603e10d610a8732ee4e4aaeddbb8a265c9b956d872034039bf9349584,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734345162419360065,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4893ee0-36ca-4cb9-a751-c2ffdd5daf75,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a88ef18bd59c12fe8d8126f9052512d6ebc875d7a9b6fbb23dfede1f07f16c7,PodSandboxId:c34856c7d42d737188108ee34b0e8af158da2ae7bfc627623f79bad62
2b1e413,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734345155347414546,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5mpr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2c6f6b-1b17-4d42-8958-26458e2900e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13387074acfc8da98881a56f788a74f50b35d7b15a8a577aadf6fbd2ec6f5bc,PodSandboxId:b6bf6b720488dd49e
1d51dbc47f5e7e165ba61e1399039f9a36ce84aa3fe6b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734345151575789575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f10c5c32-c4e9-4810-91a1-603c2cff9bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad21e51d9468859792eca6c6c12a6b98daa214025f3a86eb245d9983fb3f7f87,PodSandboxId:8a1ea18f44a1e99771a2a3465afc7
8b6a2ba06f97b77af59ebf47e8a5d2201d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734345148129253008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8thf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc914613-3264-4abb-8a01-5194512e0048,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7ddbc1370793d8ad1ad8944388f73158a264ea36736a0363d5182157769b25,PodSandboxId:5ab34b9cea28b9b97707e833d155151a6ae2ee22867fe8be5f5239850bbe3343,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734345145743338549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n22fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 963da550-43ba-48fd-8b0e-76fc08650c48,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:2f038fc8e06f16345e0d02b668a881e146e6df0a49bcf18d1d780a3e10228a2b,PodSandboxId:3c727e0b159b2aee986f6ce329480f535a0e916c3e461e2c708f79ce77eb3817,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734345134279958825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeed55f4344b1bf03925a77c9c485375,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:706427e1fde240f1832ff7654149b844a15d1e088423c1a75ed22d9c2674fd2a,PodSandboxId:054f89d61daae9d1ba33f7e69994ef287decf84dd7eaf00093ae0285c0c63396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734345134276277718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8538ef128b74f57b4a6873d0dd11ee1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:39e44b7374d0a0030f2b9f2558e0d43d472aa4af29731b9052197cee54e71e12,PodSandboxId:bf7bed144b22198ecadae4db98844dc97f8ded9d3e7f85e4418bb64e24403118,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734345134270975940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5986cac6d125a4abfa620919bf8afc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:876c92f4c339793a5d45c1c3831f7ef86d9bc7581b6c98dc51bac41824ba98ea,PodSandboxId:488f268bc1ec01d9551bed7d8229a6890ac8743348d74006b57de173c97818db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1734345134223155861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac9d63d2c2dcc1b75f2fa8a1272267c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=0756dc4e-13fa-4241-be5f-43216a1635e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.101275633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c804ee5-b212-4e25-9996-354b7742dc4a name=/runtime.v1.RuntimeService/Version
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.101364001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c804ee5-b212-4e25-9996-354b7742dc4a name=/runtime.v1.RuntimeService/Version
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.102380579Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b15a0f6-7a62-466d-98f4-1d8a260d9ded name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.103674018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345420103648356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b15a0f6-7a62-466d-98f4-1d8a260d9ded name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.104549771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=627bcec7-1c2e-4df7-9394-fdc63096798e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.104646635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=627bcec7-1c2e-4df7-9394-fdc63096798e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.105151851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e81550007ffca59cb90023d3a841cff89d7da6ce38969537258c163a8f348f7,PodSandboxId:be62c474518c58e88907df6aa4a41b08f52bcae3de431c69898ba171788dc081,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734345282010385018,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a92e7de-8018-453e-9698-b51f8a038f3a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bb8b76bd3dade77077c6898235f291741d4426a332d2824fe0e0b886242920,PodSandboxId:ea109ac67ead1b8aad4c0bf488f0b24bc55e4b272274734831daf7b4d284df94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734345231080299961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8dbdf275-6cac-47d9-a5f1-d03fff3bb404,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033922d5d5846d638260076a231caf4f11bd7818bf32fc7fc375b3a200d43b5,PodSandboxId:c6dfaf7164af5cc219fc50274bc28271ae862a1921dd11288bd3d9eeb1c9fefc,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734345224071986071,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-dt2gx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e8399a8f-fd6c-4fa1-9548-127b14d5e96a,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:944b06a2fc9b508300ed1fef506307c0d55c4c1259ee839fd8e06486311fe6a8,PodSandboxId:de0a74c84fe1f027f7487fd71f955f6bed17b1ba204660552cf035986594f974,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1734345205332677207,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-plw8v,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a47a5b8a-cdc8-4830-b779-8add294d86d0,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1835ee4863054cc3ab3b4707317361993256acf7f93ac1be721d2e18f1755bb9,PodSandboxId:da59fd91a0119a9d97fd62e4bae0218db8dfb521d0c76ee64506e00c81eaf7ce,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734345205030705687,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xzbkq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f4d0e9f7-2949-4ec5-9a16-4e08788e1612,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7cf5993f5adc58e7cb62f9fad07598cc030b3817a95bb2022871b613088447,PodSandboxId:8d42d70ae34d0c58a47a91721f96b845ed1d83eb39005e1c4cf52c3ea544d286,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734345184790464453,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-lk9mr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe81a6d6-63fe-417e-9b7d-9047da33acbf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3dac3cede3d30ebd87f3711e892cfbc049865beb359cc4b0ebef83a352a3db,PodSandboxId:a513e6d603e10d610a8732ee4e4aaeddbb8a265c9b956d872034039bf9349584,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns
@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734345162419360065,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4893ee0-36ca-4cb9-a751-c2ffdd5daf75,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a88ef18bd59c12fe8d8126f9052512d6ebc875d7a9b6fbb23dfede1f07f16c7,PodSandboxId:c34856c7d42d737188108ee34b0e8af158da2ae7bfc627623f79bad62
2b1e413,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734345155347414546,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5mpr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2c6f6b-1b17-4d42-8958-26458e2900e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13387074acfc8da98881a56f788a74f50b35d7b15a8a577aadf6fbd2ec6f5bc,PodSandboxId:b6bf6b720488dd49e
1d51dbc47f5e7e165ba61e1399039f9a36ce84aa3fe6b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734345151575789575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f10c5c32-c4e9-4810-91a1-603c2cff9bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad21e51d9468859792eca6c6c12a6b98daa214025f3a86eb245d9983fb3f7f87,PodSandboxId:8a1ea18f44a1e99771a2a3465afc7
8b6a2ba06f97b77af59ebf47e8a5d2201d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734345148129253008,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8thf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc914613-3264-4abb-8a01-5194512e0048,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termin
ationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7ddbc1370793d8ad1ad8944388f73158a264ea36736a0363d5182157769b25,PodSandboxId:5ab34b9cea28b9b97707e833d155151a6ae2ee22867fe8be5f5239850bbe3343,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734345145743338549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n22fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 963da550-43ba-48fd-8b0e-76fc08650c48,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuber
netes.pod.terminationGracePeriod: 30,},},&Container{Id:2f038fc8e06f16345e0d02b668a881e146e6df0a49bcf18d1d780a3e10228a2b,PodSandboxId:3c727e0b159b2aee986f6ce329480f535a0e916c3e461e2c708f79ce77eb3817,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734345134279958825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeed55f4344b1bf03925a77c9c485375,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:706427e1fde240f1832ff7654149b844a15d1e088423c1a75ed22d9c2674fd2a,PodSandboxId:054f89d61daae9d1ba33f7e69994ef287decf84dd7eaf00093ae0285c0c63396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734345134276277718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8538ef128b74f57b4a6873d0dd11ee1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,
},},&Container{Id:39e44b7374d0a0030f2b9f2558e0d43d472aa4af29731b9052197cee54e71e12,PodSandboxId:bf7bed144b22198ecadae4db98844dc97f8ded9d3e7f85e4418bb64e24403118,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734345134270975940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5986cac6d125a4abfa620919bf8afc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id
:876c92f4c339793a5d45c1c3831f7ef86d9bc7581b6c98dc51bac41824ba98ea,PodSandboxId:488f268bc1ec01d9551bed7d8229a6890ac8743348d74006b57de173c97818db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1734345134223155861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac9d63d2c2dcc1b75f2fa8a1272267c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/inte
rceptors.go:74" id=627bcec7-1c2e-4df7-9394-fdc63096798e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.119922451Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.v2+json\"" file="docker/docker_client.go:964"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.120102948Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.121017632Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.121101498Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\"" file="docker/docker_image_src.go:87"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.121141913Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /run/containers/0/auth.json" file="config/config.go:846"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.121188107Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.config/containers/auth.json" file="config/config.go:846"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.121218105Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.docker/config.json" file="config/config.go:846"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.121247037Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.dockercfg" file="config/config.go:846"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.121269048Z" level=debug msg="No credentials for docker.io/kicbase/echo-server found" file="config/config.go:272"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.121297444Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.121344313Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Dec 16 10:37:00 addons-020871 crio[661]: time="2024-12-16 10:37:00.121384214Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e81550007ffc       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   be62c474518c5       nginx
	c7bb8b76bd3da       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   ea109ac67ead1       busybox
	2033922d5d584       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   c6dfaf7164af5       ingress-nginx-controller-5f85ff4588-dt2gx
	944b06a2fc9b5       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   de0a74c84fe1f       ingress-nginx-admission-patch-plw8v
	1835ee4863054       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   da59fd91a0119       ingress-nginx-admission-create-xzbkq
	3d7cf5993f5ad       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        3 minutes ago       Running             metrics-server            0                   8d42d70ae34d0       metrics-server-84c5f94fbc-lk9mr
	2f3dac3cede3d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   a513e6d603e10       kube-ingress-dns-minikube
	7a88ef18bd59c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   c34856c7d42d7       amd-gpu-device-plugin-5mpr5
	f13387074acfc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   b6bf6b720488d       storage-provisioner
	ad21e51d94688       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   8a1ea18f44a1e       coredns-7c65d6cfc9-n8thf
	6d7ddbc137079       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago       Running             kube-proxy                0                   5ab34b9cea28b       kube-proxy-n22fm
	2f038fc8e06f1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago       Running             etcd                      0                   3c727e0b159b2       etcd-addons-020871
	706427e1fde24       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             4 minutes ago       Running             kube-controller-manager   0                   054f89d61daae       kube-controller-manager-addons-020871
	39e44b7374d0a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             4 minutes ago       Running             kube-scheduler            0                   bf7bed144b221       kube-scheduler-addons-020871
	876c92f4c3397       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             4 minutes ago       Running             kube-apiserver            0                   488f268bc1ec0       kube-apiserver-addons-020871
	
	
	==> coredns [ad21e51d9468859792eca6c6c12a6b98daa214025f3a86eb245d9983fb3f7f87] <==
	[INFO] 10.244.0.8:42545 - 12236 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000200407s
	[INFO] 10.244.0.8:42545 - 55433 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000083842s
	[INFO] 10.244.0.8:42545 - 35620 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00007963s
	[INFO] 10.244.0.8:42545 - 65148 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000072755s
	[INFO] 10.244.0.8:42545 - 33403 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000035632s
	[INFO] 10.244.0.8:42545 - 44915 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000082085s
	[INFO] 10.244.0.8:42545 - 58076 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000084852s
	[INFO] 10.244.0.8:57763 - 32621 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000159166s
	[INFO] 10.244.0.8:57763 - 32949 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000363751s
	[INFO] 10.244.0.8:33619 - 59089 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000093128s
	[INFO] 10.244.0.8:33619 - 59299 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000175087s
	[INFO] 10.244.0.8:41620 - 4813 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000041687s
	[INFO] 10.244.0.8:41620 - 5021 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103454s
	[INFO] 10.244.0.8:48439 - 37361 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113097s
	[INFO] 10.244.0.8:48439 - 37556 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000209623s
	[INFO] 10.244.0.23:59394 - 65390 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000433763s
	[INFO] 10.244.0.23:35456 - 20986 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000559095s
	[INFO] 10.244.0.23:36861 - 40141 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096902s
	[INFO] 10.244.0.23:41385 - 3812 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136455s
	[INFO] 10.244.0.23:39316 - 31990 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099434s
	[INFO] 10.244.0.23:51606 - 12422 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086639s
	[INFO] 10.244.0.23:53671 - 17489 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001146295s
	[INFO] 10.244.0.23:34639 - 62270 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.000992981s
	[INFO] 10.244.0.26:38057 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000416725s
	[INFO] 10.244.0.26:35943 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134101s
	
	
	==> describe nodes <==
	Name:               addons-020871
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-020871
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8
	                    minikube.k8s.io/name=addons-020871
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T10_32_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-020871
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 10:32:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-020871
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 10:36:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 10:35:23 +0000   Mon, 16 Dec 2024 10:32:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 10:35:23 +0000   Mon, 16 Dec 2024 10:32:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 10:35:23 +0000   Mon, 16 Dec 2024 10:32:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 10:35:23 +0000   Mon, 16 Dec 2024 10:32:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    addons-020871
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 549475808f404566b3d2b6af38f9bae5
	  System UUID:                54947580-8f40-4566-b3d2-b6af38f9bae5
	  Boot ID:                    420edaf2-6fbd-459e-9928-8db34caeabe6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  default                     hello-world-app-55bf9c44b4-ckdlt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-dt2gx    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m27s
	  kube-system                 amd-gpu-device-plugin-5mpr5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-7c65d6cfc9-n8thf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m36s
	  kube-system                 etcd-addons-020871                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-020871                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-020871        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-proxy-n22fm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-scheduler-addons-020871                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 metrics-server-84c5f94fbc-lk9mr              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m33s  kube-proxy       
	  Normal  Starting                 4m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m41s  kubelet          Node addons-020871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s  kubelet          Node addons-020871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s  kubelet          Node addons-020871 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m40s  kubelet          Node addons-020871 status is now: NodeReady
	  Normal  RegisteredNode           4m37s  node-controller  Node addons-020871 event: Registered Node addons-020871 in Controller
	
	
	==> dmesg <==
	[  +0.056474] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.982553] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.077436] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.841575] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.152148] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.210190] kauditd_printk_skb: 117 callbacks suppressed
	[  +5.021233] kauditd_printk_skb: 152 callbacks suppressed
	[  +7.051390] kauditd_printk_skb: 67 callbacks suppressed
	[Dec16 10:33] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.426656] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.833011] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.054636] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.086667] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.075112] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.130070] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.841732] kauditd_printk_skb: 7 callbacks suppressed
	[Dec16 10:34] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.153240] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.653133] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.133504] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.092868] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.358953] kauditd_printk_skb: 16 callbacks suppressed
	[ +10.377541] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 10:35] kauditd_printk_skb: 21 callbacks suppressed
	[ +31.813058] kauditd_printk_skb: 76 callbacks suppressed
	
	
	==> etcd [2f038fc8e06f16345e0d02b668a881e146e6df0a49bcf18d1d780a3e10228a2b] <==
	{"level":"warn","ts":"2024-12-16T10:33:29.440814Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T10:33:28.923583Z","time spent":"517.183211ms","remote":"127.0.0.1:38202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4457,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" mod_revision:703 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" value_size:4391 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" > >"}
	{"level":"warn","ts":"2024-12-16T10:33:29.442100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.761478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-12-16T10:33:29.442151Z","caller":"traceutil/trace.go:171","msg":"trace[1703560266] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1010; }","duration":"176.812985ms","start":"2024-12-16T10:33:29.265327Z","end":"2024-12-16T10:33:29.442140Z","steps":["trace[1703560266] 'agreement among raft nodes before linearized reading'  (duration: 176.697202ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:29.442357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.475767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:29.442394Z","caller":"traceutil/trace.go:171","msg":"trace[783107619] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1010; }","duration":"123.51582ms","start":"2024-12-16T10:33:29.318872Z","end":"2024-12-16T10:33:29.442388Z","steps":["trace[783107619] 'agreement among raft nodes before linearized reading'  (duration: 123.469098ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:29.442466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.016884ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:29.442497Z","caller":"traceutil/trace.go:171","msg":"trace[390738922] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1010; }","duration":"139.050916ms","start":"2024-12-16T10:33:29.303440Z","end":"2024-12-16T10:33:29.442491Z","steps":["trace[390738922] 'agreement among raft nodes before linearized reading'  (duration: 139.011749ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:33:41.087723Z","caller":"traceutil/trace.go:171","msg":"trace[1465388956] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"113.670692ms","start":"2024-12-16T10:33:40.974034Z","end":"2024-12-16T10:33:41.087705Z","steps":["trace[1465388956] 'process raft request'  (duration: 113.307218ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:43.237535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.02215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:43.237681Z","caller":"traceutil/trace.go:171","msg":"trace[783137280] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1090; }","duration":"224.186702ms","start":"2024-12-16T10:33:43.013481Z","end":"2024-12-16T10:33:43.237668Z","steps":["trace[783137280] 'range keys from in-memory index tree'  (duration: 223.920529ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:43.237676Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.908229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:43.237804Z","caller":"traceutil/trace.go:171","msg":"trace[835467499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1090; }","duration":"215.043181ms","start":"2024-12-16T10:33:43.022752Z","end":"2024-12-16T10:33:43.237795Z","steps":["trace[835467499] 'range keys from in-memory index tree'  (duration: 214.86284ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:34:14.572715Z","caller":"traceutil/trace.go:171","msg":"trace[1147357303] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"178.210654ms","start":"2024-12-16T10:34:14.394489Z","end":"2024-12-16T10:34:14.572700Z","steps":["trace[1147357303] 'process raft request'  (duration: 178.029069ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:34:24.028858Z","caller":"traceutil/trace.go:171","msg":"trace[1410742977] linearizableReadLoop","detail":"{readStateIndex:1333; appliedIndex:1332; }","duration":"253.650344ms","start":"2024-12-16T10:34:23.775193Z","end":"2024-12-16T10:34:24.028843Z","steps":["trace[1410742977] 'read index received'  (duration: 253.537438ms)","trace[1410742977] 'applied index is now lower than readState.Index'  (duration: 112.394µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:34:24.028949Z","caller":"traceutil/trace.go:171","msg":"trace[694862068] transaction","detail":"{read_only:false; response_revision:1294; number_of_response:1; }","duration":"436.23531ms","start":"2024-12-16T10:34:23.592708Z","end":"2024-12-16T10:34:24.028943Z","steps":["trace[694862068] 'process raft request'  (duration: 436.031154ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:34:24.029030Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T10:34:23.592689Z","time spent":"436.278761ms","remote":"127.0.0.1:38216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1285 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-12-16T10:34:24.029189Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.905067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:1 size:3395"}
	{"level":"info","ts":"2024-12-16T10:34:24.029228Z","caller":"traceutil/trace.go:171","msg":"trace[850312144] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1294; }","duration":"251.950447ms","start":"2024-12-16T10:34:23.777269Z","end":"2024-12-16T10:34:24.029219Z","steps":["trace[850312144] 'agreement among raft nodes before linearized reading'  (duration: 251.837753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:34:24.029406Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.365448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:883"}
	{"level":"info","ts":"2024-12-16T10:34:24.029437Z","caller":"traceutil/trace.go:171","msg":"trace[1343194805] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:1294; }","duration":"192.398792ms","start":"2024-12-16T10:34:23.837032Z","end":"2024-12-16T10:34:24.029431Z","steps":["trace[1343194805] 'agreement among raft nodes before linearized reading'  (duration: 192.316861ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:34:24.029559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.364555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-12-16T10:34:24.029573Z","caller":"traceutil/trace.go:171","msg":"trace[1084207642] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1294; }","duration":"254.380329ms","start":"2024-12-16T10:34:23.775188Z","end":"2024-12-16T10:34:24.029569Z","steps":["trace[1084207642] 'agreement among raft nodes before linearized reading'  (duration: 254.330675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:34:24.029710Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.656876ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-12-16T10:34:24.029782Z","caller":"traceutil/trace.go:171","msg":"trace[1860626495] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1294; }","duration":"237.733849ms","start":"2024-12-16T10:34:23.792040Z","end":"2024-12-16T10:34:24.029774Z","steps":["trace[1860626495] 'agreement among raft nodes before linearized reading'  (duration: 237.598653ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:34:36.364177Z","caller":"traceutil/trace.go:171","msg":"trace[1382503755] transaction","detail":"{read_only:false; response_revision:1408; number_of_response:1; }","duration":"227.479933ms","start":"2024-12-16T10:34:36.136669Z","end":"2024-12-16T10:34:36.364149Z","steps":["trace[1382503755] 'process raft request'  (duration: 227.174049ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:37:00 up 5 min,  0 users,  load average: 0.22, 0.92, 0.51
	Linux addons-020871 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [876c92f4c339793a5d45c1c3831f7ef86d9bc7581b6c98dc51bac41824ba98ea] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 10:34:16.047756       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 10:34:16.068219       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1216 10:34:17.865543       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.191.184"}
	I1216 10:34:37.699142       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 10:34:37.907516       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.126.117"}
	I1216 10:34:41.761675       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1216 10:34:42.842372       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1216 10:34:45.573760       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 10:35:03.809948       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:35:03.809986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:35:03.870589       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:35:03.870671       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:35:03.884343       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:35:03.884374       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:35:03.894392       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:35:03.896354       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1216 10:35:04.193382       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W1216 10:35:04.885375       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1216 10:35:05.140330       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1216 10:35:05.140643       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E1216 10:35:19.570109       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1216 10:36:59.002287       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.53.75"}
	
	
	==> kube-controller-manager [706427e1fde240f1832ff7654149b844a15d1e088423c1a75ed22d9c2674fd2a] <==
	E1216 10:35:25.005589       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:35:26.431813       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:35:26.431869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:35:39.121188       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:35:39.121279       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:35:39.289094       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:35:39.289213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:35:39.465021       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:35:39.465141       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1216 10:35:52.005068       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W1216 10:36:10.537051       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:10.537100       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:19.850089       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:19.850146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:20.258892       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:20.258998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:22.795641       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:22.795835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:49.054396       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:49.054516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:36:55.254161       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:36:55.254221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1216 10:36:58.831495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="43.257827ms"
	I1216 10:36:58.851529       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="19.986827ms"
	I1216 10:36:58.851644       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="66.57µs"
	
	
	==> kube-proxy [6d7ddbc1370793d8ad1ad8944388f73158a264ea36736a0363d5182157769b25] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 10:32:26.436733       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 10:32:26.453476       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E1216 10:32:26.453523       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 10:32:26.557943       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1216 10:32:26.558020       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 10:32:26.558045       1 server_linux.go:169] "Using iptables Proxier"
	I1216 10:32:26.562854       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 10:32:26.563130       1 server.go:483] "Version info" version="v1.31.2"
	I1216 10:32:26.563142       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 10:32:26.564759       1 config.go:199] "Starting service config controller"
	I1216 10:32:26.564769       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 10:32:26.564785       1 config.go:105] "Starting endpoint slice config controller"
	I1216 10:32:26.564789       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 10:32:26.565128       1 config.go:328] "Starting node config controller"
	I1216 10:32:26.565135       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 10:32:26.665708       1 shared_informer.go:320] Caches are synced for node config
	I1216 10:32:26.665759       1 shared_informer.go:320] Caches are synced for service config
	I1216 10:32:26.665781       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [39e44b7374d0a0030f2b9f2558e0d43d472aa4af29731b9052197cee54e71e12] <==
	W1216 10:32:17.710859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 10:32:17.710997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.716425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 10:32:17.716530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.723497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 10:32:17.723545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.724727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1216 10:32:17.724816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.734485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 10:32:17.734538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.784991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 10:32:17.785065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.800512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 10:32:17.800572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.850379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 10:32:17.850713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:18.046883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 10:32:18.046987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:18.057411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 10:32:18.057547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:18.062149       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 10:32:18.062650       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1216 10:32:18.081749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 10:32:18.082369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1216 10:32:20.882828       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 10:36:58 addons-020871 kubelet[1211]: E1216 10:36:58.815800    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae4faab9-2f13-4ab8-b667-d54777af1250" containerName="local-path-provisioner"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: E1216 10:36:58.815806    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="node-driver-registrar"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: E1216 10:36:58.815814    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="liveness-probe"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: E1216 10:36:58.815820    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="csi-provisioner"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: E1216 10:36:58.815826    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="hostpath"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: E1216 10:36:58.815831    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="csi-snapshotter"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: E1216 10:36:58.815837    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="csi-external-health-monitor-controller"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: E1216 10:36:58.815844    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ade3b0d6-039c-4252-be2d-5f4ce1376484" containerName="csi-attacher"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: E1216 10:36:58.815850    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0de024c3-f070-4255-ba7b-a35bff98066c" containerName="helper-pod"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: E1216 10:36:58.815857    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="080024e6-a3d1-4134-87ff-75521de39601" containerName="volume-snapshot-controller"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815887    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="csi-external-health-monitor-controller"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815895    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de024c3-f070-4255-ba7b-a35bff98066c" containerName="helper-pod"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815902    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="csi-snapshotter"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815907    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="liveness-probe"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815913    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="hostpath"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815918    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d3fd497-6347-4f6b-8e24-bda139222416" containerName="csi-resizer"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815923    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade3b0d6-039c-4252-be2d-5f4ce1376484" containerName="csi-attacher"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815928    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="080024e6-a3d1-4134-87ff-75521de39601" containerName="volume-snapshot-controller"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815933    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae4faab9-2f13-4ab8-b667-d54777af1250" containerName="local-path-provisioner"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815937    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="9413102a-ad1a-4ef5-b4fa-ab7380a28148" containerName="volume-snapshot-controller"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815942    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="node-driver-registrar"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.815947    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="562bd994-9c64-4aaf-9ebf-9e8a574500d9" containerName="csi-provisioner"
	Dec 16 10:36:58 addons-020871 kubelet[1211]: I1216 10:36:58.862312    1211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k95s\" (UniqueName: \"kubernetes.io/projected/5e874c3a-c63a-447f-b3f3-1a25182e6e5f-kube-api-access-4k95s\") pod \"hello-world-app-55bf9c44b4-ckdlt\" (UID: \"5e874c3a-c63a-447f-b3f3-1a25182e6e5f\") " pod="default/hello-world-app-55bf9c44b4-ckdlt"
	Dec 16 10:36:59 addons-020871 kubelet[1211]: E1216 10:36:59.568533    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345419568047471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:36:59 addons-020871 kubelet[1211]: E1216 10:36:59.568576    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345419568047471,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f13387074acfc8da98881a56f788a74f50b35d7b15a8a577aadf6fbd2ec6f5bc] <==
	I1216 10:32:32.299444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 10:32:32.580657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 10:32:32.580738       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 10:32:32.892192       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 10:32:32.895888       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-020871_ee801e69-63da-4df8-86b6-62118e25b60b!
	I1216 10:32:32.897592       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c21f86b-177f-4f45-83a9-a66c7cc6f27e", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-020871_ee801e69-63da-4df8-86b6-62118e25b60b became leader
	I1216 10:32:32.996508       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-020871_ee801e69-63da-4df8-86b6-62118e25b60b!
	

                                                
                                                
-- /stdout --
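Two patterns stand out in the captured logs above. The kube-scheduler "forbidden" list/watch errors are confined to 10:32:17-10:32:18 and stop once its caches sync at 10:32:20, which looks like the usual transient RBAC propagation during control-plane startup. The kubelet eviction-manager "missing image stats" errors at 10:36:59 point at the CRI-O image filesystem; pods on the node were still being scheduled and started at that time (hello-world-app was assigned seconds earlier), so these messages are likely unrelated to the ingress failure. A hedged way to look at the image filesystem CRI-O reports, assuming the addons-020871 VM is still up and crictl is available inside the guest:

    out/minikube-linux-amd64 -p addons-020871 ssh -- sudo crictl imagefsinfo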
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-020871 -n addons-020871
helpers_test.go:261: (dbg) Run:  kubectl --context addons-020871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-ckdlt ingress-nginx-admission-create-xzbkq ingress-nginx-admission-patch-plw8v
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-020871 describe pod hello-world-app-55bf9c44b4-ckdlt ingress-nginx-admission-create-xzbkq ingress-nginx-admission-patch-plw8v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-020871 describe pod hello-world-app-55bf9c44b4-ckdlt ingress-nginx-admission-create-xzbkq ingress-nginx-admission-patch-plw8v: exit status 1 (73.730572ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-ckdlt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-020871/192.168.39.206
	Start Time:       Mon, 16 Dec 2024 10:36:58 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4k95s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4k95s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-ckdlt to addons-020871
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xzbkq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-plw8v" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-020871 describe pod hello-world-app-55bf9c44b4-ckdlt ingress-nginx-admission-create-xzbkq ingress-nginx-admission-patch-plw8v: exit status 1
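The two NotFound errors are expected here: ingress-nginx-admission-create-xzbkq and ingress-nginx-admission-patch-plw8v are one-shot Job pods for the admission webhook and are normally cleaned up after they complete, so they no longer exist by the time of this describe. hello-world-app-55bf9c44b4-ckdlt is only 3 seconds old and still pulling its image. Hedged follow-up checks, assuming the profile is still running; the resource names are taken from the output above:

    kubectl --context addons-020871 -n ingress-nginx get jobs,pods
    kubectl --context addons-020871 get pod -l app=hello-world-app -w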
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 addons disable ingress-dns --alsologtostderr -v=1: (1.371573073s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 addons disable ingress --alsologtostderr -v=1: (7.676041512s)
--- FAIL: TestAddons/parallel/Ingress (152.85s)
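The step that actually failed is the earlier ssh curl against http://127.0.0.1/ with Host: nginx.example.com, which exited with status 28, matching curl's operation-timed-out exit code and suggesting the request to the ingress controller hung rather than being refused. Had the addons still been enabled (they are disabled in the cleanup just above), a hedged manual re-check could look like the following; the controller deployment name is an assumption based on the upstream ingress-nginx addon:

    kubectl --context addons-020871 -n ingress-nginx get pods,svc
    kubectl --context addons-020871 get ingress -A
    kubectl --context addons-020871 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100
    out/minikube-linux-amd64 -p addons-020871 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"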

                                                
                                    
TestAddons/parallel/MetricsServer (361.43s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.421845ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-lk9mr" [fe81a6d6-63fe-417e-9b7d-9047da33acbf] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004979022s
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (66.10061ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 2m8.554947577s

                                                
                                                
** /stderr **
I1216 10:34:34.557361  217519 retry.go:31] will retry after 2.407597063s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (126.082441ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 2m11.088811601s

                                                
                                                
** /stderr **
I1216 10:34:37.091423  217519 retry.go:31] will retry after 2.688102908s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (60.074036ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 2m13.83847574s

                                                
                                                
** /stderr **
I1216 10:34:39.840769  217519 retry.go:31] will retry after 9.071837564s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (67.787594ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 2m22.978141414s

                                                
                                                
** /stderr **
I1216 10:34:48.980681  217519 retry.go:31] will retry after 6.142710188s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (65.201547ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 2m29.187153583s

                                                
                                                
** /stderr **
I1216 10:34:55.189714  217519 retry.go:31] will retry after 12.099540601s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (62.253807ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 2m41.350067361s

                                                
                                                
** /stderr **
I1216 10:35:07.352570  217519 retry.go:31] will retry after 15.332396544s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (63.630518ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 2m56.747745122s

                                                
                                                
** /stderr **
I1216 10:35:22.750290  217519 retry.go:31] will retry after 49.204532249s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (61.510065ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 3m46.017486341s

                                                
                                                
** /stderr **
I1216 10:36:12.019885  217519 retry.go:31] will retry after 1m11.680554796s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (60.573315ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 4m57.76312793s

                                                
                                                
** /stderr **
I1216 10:37:23.765799  217519 retry.go:31] will retry after 1m0.127983249s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (61.103481ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 5m57.957967124s

                                                
                                                
** /stderr **
I1216 10:38:23.960371  217519 retry.go:31] will retry after 1m1.880202294s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (61.392497ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 6m59.899712988s

                                                
                                                
** /stderr **
I1216 10:39:25.902422  217519 retry.go:31] will retry after 1m2.393676732s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-020871 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-020871 top pods -n kube-system: exit status 1 (59.669711ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-5mpr5, age: 8m2.353999011s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
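Every kubectl top pods attempt over roughly eight minutes fails the same way: metrics never become available for the amd-gpu-device-plugin pod, even though the metrics-server pod itself was reported Running within five seconds. Hedged checks of the metrics pipeline, assuming the profile is still up; the deployment and APIService names below follow the stock metrics-server addon and are not taken from this run:

    kubectl --context addons-020871 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-020871 -n kube-system logs deploy/metrics-server --tail=50
    kubectl --context addons-020871 get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods

If the APIService reports Available=True and the raw query returns items, the gap is per-pod (kubelet stats for that pod); if not, the failure sits at the metrics-server or aggregation layer.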
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-020871 -n addons-020871
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 logs -n 25: (1.182498955s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-893315                                                                     | download-only-893315 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:31 UTC |
	| delete  | -p download-only-270974                                                                     | download-only-270974 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:31 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-453115 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC |                     |
	|         | binary-mirror-453115                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40829                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-453115                                                                     | binary-mirror-453115 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:31 UTC |
	| addons  | disable dashboard -p                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC |                     |
	|         | addons-020871                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC |                     |
	|         | addons-020871                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-020871 --wait=true                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:33 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:33 UTC | 16 Dec 24 10:33 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:33 UTC | 16 Dec 24 10:34 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | -p addons-020871                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-020871 addons                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-020871 ip                                                                            | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-020871 addons                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-020871 ssh curl -s                                                                   | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-020871 addons                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:34 UTC | 16 Dec 24 10:34 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-020871 ssh cat                                                                       | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | /opt/local-path-provisioner/pvc-2768e9dc-d30c-44a0-aa98-3d81d07df32d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-020871 addons                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-020871 addons                                                                        | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:35 UTC | 16 Dec 24 10:35 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-020871 ip                                                                            | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:36 UTC | 16 Dec 24 10:36 UTC |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:37 UTC | 16 Dec 24 10:37 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-020871 addons disable                                                                | addons-020871        | jenkins | v1.34.0 | 16 Dec 24 10:37 UTC | 16 Dec 24 10:37 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 10:31:37
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 10:31:37.831683  218168 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:31:37.831820  218168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:31:37.831830  218168 out.go:358] Setting ErrFile to fd 2...
	I1216 10:31:37.831834  218168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:31:37.832004  218168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 10:31:37.832670  218168 out.go:352] Setting JSON to false
	I1216 10:31:37.833631  218168 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8045,"bootTime":1734337053,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:31:37.833745  218168 start.go:139] virtualization: kvm guest
	I1216 10:31:37.836024  218168 out.go:177] * [addons-020871] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 10:31:37.837721  218168 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 10:31:37.837716  218168 notify.go:220] Checking for updates...
	I1216 10:31:37.839534  218168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:31:37.840926  218168 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 10:31:37.842333  218168 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 10:31:37.843678  218168 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 10:31:37.845224  218168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 10:31:37.846706  218168 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:31:37.882179  218168 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 10:31:37.883825  218168 start.go:297] selected driver: kvm2
	I1216 10:31:37.883850  218168 start.go:901] validating driver "kvm2" against <nil>
	I1216 10:31:37.883867  218168 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 10:31:37.884733  218168 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 10:31:37.884859  218168 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 10:31:37.902062  218168 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 10:31:37.902125  218168 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 10:31:37.902406  218168 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 10:31:37.902445  218168 cni.go:84] Creating CNI manager for ""
	I1216 10:31:37.902474  218168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 10:31:37.902483  218168 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 10:31:37.902531  218168 start.go:340] cluster config:
	{Name:addons-020871 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-020871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:31:37.902630  218168 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 10:31:37.904551  218168 out.go:177] * Starting "addons-020871" primary control-plane node in "addons-020871" cluster
	I1216 10:31:37.905745  218168 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:31:37.905798  218168 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1216 10:31:37.905812  218168 cache.go:56] Caching tarball of preloaded images
	I1216 10:31:37.905894  218168 preload.go:172] Found /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 10:31:37.905907  218168 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1216 10:31:37.906201  218168 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/config.json ...
	I1216 10:31:37.906231  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/config.json: {Name:mk776e5f2bcf43e15d10ef296a4be30c7dd13575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:31:37.906412  218168 start.go:360] acquireMachinesLock for addons-020871: {Name:mk416695f38f04563a2ca155c5383ee4d8f0f97b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 10:31:37.906481  218168 start.go:364] duration metric: took 49.257µs to acquireMachinesLock for "addons-020871"
	I1216 10:31:37.906510  218168 start.go:93] Provisioning new machine with config: &{Name:addons-020871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-020871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 10:31:37.906587  218168 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 10:31:37.908284  218168 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1216 10:31:37.908464  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:31:37.908525  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:31:37.924184  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35355
	I1216 10:31:37.924744  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:31:37.925341  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:31:37.925366  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:31:37.925731  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:31:37.925904  218168 main.go:141] libmachine: (addons-020871) Calling .GetMachineName
	I1216 10:31:37.926061  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:31:37.926231  218168 start.go:159] libmachine.API.Create for "addons-020871" (driver="kvm2")
	I1216 10:31:37.926280  218168 client.go:168] LocalClient.Create starting
	I1216 10:31:37.926326  218168 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem
	I1216 10:31:38.004936  218168 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem
	I1216 10:31:38.084095  218168 main.go:141] libmachine: Running pre-create checks...
	I1216 10:31:38.084121  218168 main.go:141] libmachine: (addons-020871) Calling .PreCreateCheck
	I1216 10:31:38.084724  218168 main.go:141] libmachine: (addons-020871) Calling .GetConfigRaw
	I1216 10:31:38.085284  218168 main.go:141] libmachine: Creating machine...
	I1216 10:31:38.085302  218168 main.go:141] libmachine: (addons-020871) Calling .Create
	I1216 10:31:38.085536  218168 main.go:141] libmachine: (addons-020871) creating KVM machine...
	I1216 10:31:38.085571  218168 main.go:141] libmachine: (addons-020871) creating network...
	I1216 10:31:38.086739  218168 main.go:141] libmachine: (addons-020871) DBG | found existing default KVM network
	I1216 10:31:38.087580  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:38.087426  218191 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015cc0}
	I1216 10:31:38.087635  218168 main.go:141] libmachine: (addons-020871) DBG | created network xml: 
	I1216 10:31:38.087659  218168 main.go:141] libmachine: (addons-020871) DBG | <network>
	I1216 10:31:38.087672  218168 main.go:141] libmachine: (addons-020871) DBG |   <name>mk-addons-020871</name>
	I1216 10:31:38.087690  218168 main.go:141] libmachine: (addons-020871) DBG |   <dns enable='no'/>
	I1216 10:31:38.087703  218168 main.go:141] libmachine: (addons-020871) DBG |   
	I1216 10:31:38.087718  218168 main.go:141] libmachine: (addons-020871) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1216 10:31:38.087772  218168 main.go:141] libmachine: (addons-020871) DBG |     <dhcp>
	I1216 10:31:38.087812  218168 main.go:141] libmachine: (addons-020871) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1216 10:31:38.087862  218168 main.go:141] libmachine: (addons-020871) DBG |     </dhcp>
	I1216 10:31:38.087892  218168 main.go:141] libmachine: (addons-020871) DBG |   </ip>
	I1216 10:31:38.087900  218168 main.go:141] libmachine: (addons-020871) DBG |   
	I1216 10:31:38.087913  218168 main.go:141] libmachine: (addons-020871) DBG | </network>
	I1216 10:31:38.087931  218168 main.go:141] libmachine: (addons-020871) DBG | 
	I1216 10:31:38.093660  218168 main.go:141] libmachine: (addons-020871) DBG | trying to create private KVM network mk-addons-020871 192.168.39.0/24...
	I1216 10:31:38.163864  218168 main.go:141] libmachine: (addons-020871) setting up store path in /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871 ...
	I1216 10:31:38.163907  218168 main.go:141] libmachine: (addons-020871) building disk image from file:///home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1216 10:31:38.163921  218168 main.go:141] libmachine: (addons-020871) DBG | private KVM network mk-addons-020871 192.168.39.0/24 created
	I1216 10:31:38.163986  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:38.163773  218191 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 10:31:38.164015  218168 main.go:141] libmachine: (addons-020871) Downloading /home/jenkins/minikube-integration/20107-210204/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1216 10:31:38.438084  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:38.437934  218191 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa...
	I1216 10:31:38.487252  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:38.487072  218191 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/addons-020871.rawdisk...
	I1216 10:31:38.487301  218168 main.go:141] libmachine: (addons-020871) DBG | Writing magic tar header
	I1216 10:31:38.487316  218168 main.go:141] libmachine: (addons-020871) DBG | Writing SSH key tar header
	I1216 10:31:38.487327  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:38.487213  218191 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871 ...
	I1216 10:31:38.487344  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871
	I1216 10:31:38.487412  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube/machines
	I1216 10:31:38.487439  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871 (perms=drwx------)
	I1216 10:31:38.487452  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 10:31:38.487483  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube/machines (perms=drwxr-xr-x)
	I1216 10:31:38.487503  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204
	I1216 10:31:38.487511  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube (perms=drwxr-xr-x)
	I1216 10:31:38.487523  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins/minikube-integration/20107-210204 (perms=drwxrwxr-x)
	I1216 10:31:38.487532  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1216 10:31:38.487538  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 10:31:38.487549  218168 main.go:141] libmachine: (addons-020871) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 10:31:38.487556  218168 main.go:141] libmachine: (addons-020871) creating domain...
	I1216 10:31:38.487566  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home/jenkins
	I1216 10:31:38.487571  218168 main.go:141] libmachine: (addons-020871) DBG | checking permissions on dir: /home
	I1216 10:31:38.487581  218168 main.go:141] libmachine: (addons-020871) DBG | skipping /home - not owner
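	(Editor's note: the "Fixing permissions" lines above walk each ancestor of the machine store, set the execute (search) bits so the path stays traversable, and skip directories the current user does not own, as with /home here. A rough sketch of that walk follows; the Linux-specific Uid check and the uniform `|0o111` chmod are simplifying assumptions, since the log shows the machine directory itself kept at drwx------.)

```go
// Hypothetical sketch of the permission-fixing walk over the minikube store path.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

func main() {
	dir := "/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871"
	uid := uint32(os.Getuid())
	for dir != "/" {
		info, err := os.Stat(dir)
		if err != nil {
			panic(err)
		}
		// Skip (and stop at) directories we do not own, e.g. /home on this host.
		if st, ok := info.Sys().(*syscall.Stat_t); ok && st.Uid != uid {
			fmt.Println("skipping", dir, "- not owner")
			break
		}
		// Add the execute (search) bits so the path remains traversable.
		if err := os.Chmod(dir, info.Mode().Perm()|0o111); err != nil {
			panic(err)
		}
		dir = filepath.Dir(dir)
	}
}
```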
	I1216 10:31:38.488662  218168 main.go:141] libmachine: (addons-020871) define libvirt domain using xml: 
	I1216 10:31:38.488691  218168 main.go:141] libmachine: (addons-020871) <domain type='kvm'>
	I1216 10:31:38.488702  218168 main.go:141] libmachine: (addons-020871)   <name>addons-020871</name>
	I1216 10:31:38.488709  218168 main.go:141] libmachine: (addons-020871)   <memory unit='MiB'>4000</memory>
	I1216 10:31:38.488716  218168 main.go:141] libmachine: (addons-020871)   <vcpu>2</vcpu>
	I1216 10:31:38.488722  218168 main.go:141] libmachine: (addons-020871)   <features>
	I1216 10:31:38.488733  218168 main.go:141] libmachine: (addons-020871)     <acpi/>
	I1216 10:31:38.488739  218168 main.go:141] libmachine: (addons-020871)     <apic/>
	I1216 10:31:38.488745  218168 main.go:141] libmachine: (addons-020871)     <pae/>
	I1216 10:31:38.488756  218168 main.go:141] libmachine: (addons-020871)     
	I1216 10:31:38.488767  218168 main.go:141] libmachine: (addons-020871)   </features>
	I1216 10:31:38.488777  218168 main.go:141] libmachine: (addons-020871)   <cpu mode='host-passthrough'>
	I1216 10:31:38.488786  218168 main.go:141] libmachine: (addons-020871)   
	I1216 10:31:38.488795  218168 main.go:141] libmachine: (addons-020871)   </cpu>
	I1216 10:31:38.488805  218168 main.go:141] libmachine: (addons-020871)   <os>
	I1216 10:31:38.488814  218168 main.go:141] libmachine: (addons-020871)     <type>hvm</type>
	I1216 10:31:38.488825  218168 main.go:141] libmachine: (addons-020871)     <boot dev='cdrom'/>
	I1216 10:31:38.488839  218168 main.go:141] libmachine: (addons-020871)     <boot dev='hd'/>
	I1216 10:31:38.488849  218168 main.go:141] libmachine: (addons-020871)     <bootmenu enable='no'/>
	I1216 10:31:38.488856  218168 main.go:141] libmachine: (addons-020871)   </os>
	I1216 10:31:38.488861  218168 main.go:141] libmachine: (addons-020871)   <devices>
	I1216 10:31:38.488868  218168 main.go:141] libmachine: (addons-020871)     <disk type='file' device='cdrom'>
	I1216 10:31:38.488877  218168 main.go:141] libmachine: (addons-020871)       <source file='/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/boot2docker.iso'/>
	I1216 10:31:38.488884  218168 main.go:141] libmachine: (addons-020871)       <target dev='hdc' bus='scsi'/>
	I1216 10:31:38.488889  218168 main.go:141] libmachine: (addons-020871)       <readonly/>
	I1216 10:31:38.488895  218168 main.go:141] libmachine: (addons-020871)     </disk>
	I1216 10:31:38.488926  218168 main.go:141] libmachine: (addons-020871)     <disk type='file' device='disk'>
	I1216 10:31:38.488944  218168 main.go:141] libmachine: (addons-020871)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 10:31:38.488974  218168 main.go:141] libmachine: (addons-020871)       <source file='/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/addons-020871.rawdisk'/>
	I1216 10:31:38.488983  218168 main.go:141] libmachine: (addons-020871)       <target dev='hda' bus='virtio'/>
	I1216 10:31:38.488992  218168 main.go:141] libmachine: (addons-020871)     </disk>
	I1216 10:31:38.489004  218168 main.go:141] libmachine: (addons-020871)     <interface type='network'>
	I1216 10:31:38.489042  218168 main.go:141] libmachine: (addons-020871)       <source network='mk-addons-020871'/>
	I1216 10:31:38.489068  218168 main.go:141] libmachine: (addons-020871)       <model type='virtio'/>
	I1216 10:31:38.489079  218168 main.go:141] libmachine: (addons-020871)     </interface>
	I1216 10:31:38.489090  218168 main.go:141] libmachine: (addons-020871)     <interface type='network'>
	I1216 10:31:38.489101  218168 main.go:141] libmachine: (addons-020871)       <source network='default'/>
	I1216 10:31:38.489111  218168 main.go:141] libmachine: (addons-020871)       <model type='virtio'/>
	I1216 10:31:38.489121  218168 main.go:141] libmachine: (addons-020871)     </interface>
	I1216 10:31:38.489132  218168 main.go:141] libmachine: (addons-020871)     <serial type='pty'>
	I1216 10:31:38.489145  218168 main.go:141] libmachine: (addons-020871)       <target port='0'/>
	I1216 10:31:38.489167  218168 main.go:141] libmachine: (addons-020871)     </serial>
	I1216 10:31:38.489179  218168 main.go:141] libmachine: (addons-020871)     <console type='pty'>
	I1216 10:31:38.489190  218168 main.go:141] libmachine: (addons-020871)       <target type='serial' port='0'/>
	I1216 10:31:38.489203  218168 main.go:141] libmachine: (addons-020871)     </console>
	I1216 10:31:38.489213  218168 main.go:141] libmachine: (addons-020871)     <rng model='virtio'>
	I1216 10:31:38.489224  218168 main.go:141] libmachine: (addons-020871)       <backend model='random'>/dev/random</backend>
	I1216 10:31:38.489239  218168 main.go:141] libmachine: (addons-020871)     </rng>
	I1216 10:31:38.489251  218168 main.go:141] libmachine: (addons-020871)     
	I1216 10:31:38.489258  218168 main.go:141] libmachine: (addons-020871)     
	I1216 10:31:38.489270  218168 main.go:141] libmachine: (addons-020871)   </devices>
	I1216 10:31:38.489279  218168 main.go:141] libmachine: (addons-020871) </domain>
	I1216 10:31:38.489313  218168 main.go:141] libmachine: (addons-020871) 
	I1216 10:31:38.495034  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:af:86:ab in network default
	I1216 10:31:38.495540  218168 main.go:141] libmachine: (addons-020871) starting domain...
	I1216 10:31:38.495560  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:38.495565  218168 main.go:141] libmachine: (addons-020871) ensuring networks are active...
	I1216 10:31:38.496296  218168 main.go:141] libmachine: (addons-020871) Ensuring network default is active
	I1216 10:31:38.496621  218168 main.go:141] libmachine: (addons-020871) Ensuring network mk-addons-020871 is active
	I1216 10:31:38.497128  218168 main.go:141] libmachine: (addons-020871) getting domain XML...
	I1216 10:31:38.497763  218168 main.go:141] libmachine: (addons-020871) creating domain...
	I1216 10:31:39.907343  218168 main.go:141] libmachine: (addons-020871) waiting for IP...
	I1216 10:31:39.908131  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:39.908659  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:39.908689  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:39.908636  218191 retry.go:31] will retry after 311.745376ms: waiting for domain to come up
	I1216 10:31:40.222162  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:40.222649  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:40.222676  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:40.222623  218191 retry.go:31] will retry after 353.739286ms: waiting for domain to come up
	I1216 10:31:40.578472  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:40.578893  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:40.578935  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:40.578875  218191 retry.go:31] will retry after 384.988826ms: waiting for domain to come up
	I1216 10:31:40.965819  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:40.966402  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:40.966442  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:40.966371  218191 retry.go:31] will retry after 461.65384ms: waiting for domain to come up
	I1216 10:31:41.430075  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:41.430489  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:41.430525  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:41.430468  218191 retry.go:31] will retry after 500.241235ms: waiting for domain to come up
	I1216 10:31:41.932193  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:41.932572  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:41.932599  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:41.932559  218191 retry.go:31] will retry after 705.18908ms: waiting for domain to come up
	I1216 10:31:42.639118  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:42.639593  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:42.639620  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:42.639560  218191 retry.go:31] will retry after 1.064300662s: waiting for domain to come up
	I1216 10:31:43.705582  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:43.706052  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:43.706078  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:43.706014  218191 retry.go:31] will retry after 1.08333648s: waiting for domain to come up
	I1216 10:31:44.790719  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:44.791148  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:44.791174  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:44.791117  218191 retry.go:31] will retry after 1.713698041s: waiting for domain to come up
	I1216 10:31:46.506060  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:46.506525  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:46.506564  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:46.506510  218191 retry.go:31] will retry after 1.515937487s: waiting for domain to come up
	I1216 10:31:48.024268  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:48.024710  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:48.024741  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:48.024681  218191 retry.go:31] will retry after 2.369610901s: waiting for domain to come up
	I1216 10:31:50.397271  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:50.397649  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:50.397681  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:50.397619  218191 retry.go:31] will retry after 2.457466679s: waiting for domain to come up
	I1216 10:31:52.858207  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:52.858676  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:52.858701  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:52.858636  218191 retry.go:31] will retry after 3.867577059s: waiting for domain to come up
	I1216 10:31:56.727503  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:31:56.727939  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find current IP address of domain addons-020871 in network mk-addons-020871
	I1216 10:31:56.727967  218168 main.go:141] libmachine: (addons-020871) DBG | I1216 10:31:56.727899  218191 retry.go:31] will retry after 4.324595651s: waiting for domain to come up
	I1216 10:32:01.056520  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.056947  218168 main.go:141] libmachine: (addons-020871) found domain IP: 192.168.39.206
	I1216 10:32:01.056988  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has current primary IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
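	(Editor's note: the "waiting for IP" retries above poll for a DHCP lease on the domain's MAC with a growing delay until an address appears. A minimal sketch of the same idea against virsh follows; the domain name is taken from the log, while the backoff constants and attempt limit are illustrative.)

```go
// Hypothetical sketch: poll the domain's DHCP lease until an IPv4 address
// shows up, backing off between attempts like the retry.go lines above.
package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"time"
)

var ipv4 = regexp.MustCompile(`\b(\d{1,3}\.){3}\d{1,3}\b`)

func main() {
	delay := 300 * time.Millisecond
	for i := 0; i < 20; i++ {
		out, _ := exec.Command("virsh", "domifaddr", "addons-020871", "--source", "lease").Output()
		if ip := ipv4.FindString(string(out)); ip != "" {
			fmt.Println("found domain IP:", ip)
			return
		}
		fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait, roughly like the logged intervals
	}
	fmt.Println("gave up waiting for an IP")
}
```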
	I1216 10:32:01.056997  218168 main.go:141] libmachine: (addons-020871) reserving static IP address...
	I1216 10:32:01.057427  218168 main.go:141] libmachine: (addons-020871) DBG | unable to find host DHCP lease matching {name: "addons-020871", mac: "52:54:00:2f:5d:dc", ip: "192.168.39.206"} in network mk-addons-020871
	I1216 10:32:01.136665  218168 main.go:141] libmachine: (addons-020871) DBG | Getting to WaitForSSH function...
	I1216 10:32:01.136698  218168 main.go:141] libmachine: (addons-020871) reserved static IP address 192.168.39.206 for domain addons-020871
	I1216 10:32:01.136713  218168 main.go:141] libmachine: (addons-020871) waiting for SSH...
	I1216 10:32:01.139566  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.140104  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.140139  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.140318  218168 main.go:141] libmachine: (addons-020871) DBG | Using SSH client type: external
	I1216 10:32:01.140348  218168 main.go:141] libmachine: (addons-020871) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa (-rw-------)
	I1216 10:32:01.140386  218168 main.go:141] libmachine: (addons-020871) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 10:32:01.140407  218168 main.go:141] libmachine: (addons-020871) DBG | About to run SSH command:
	I1216 10:32:01.140422  218168 main.go:141] libmachine: (addons-020871) DBG | exit 0
	I1216 10:32:01.265279  218168 main.go:141] libmachine: (addons-020871) DBG | SSH cmd err, output: <nil>: 
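	(Editor's note: WaitForSSH above uses the external ssh client with the options shown and keeps running `exit 0` until the guest answers. A compact sketch of that probe follows; the host, key path, and options are copied from the log line, while the retry count and sleep are assumptions.)

```go
// Hypothetical sketch: keep running "ssh ... exit 0" until the VM's sshd is up.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-o", "ConnectTimeout=10", "-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null", "-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa",
		"-p", "22", "docker@192.168.39.206", "exit 0",
	}
	for i := 0; i < 30; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
```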
	I1216 10:32:01.265552  218168 main.go:141] libmachine: (addons-020871) KVM machine creation complete
	I1216 10:32:01.265952  218168 main.go:141] libmachine: (addons-020871) Calling .GetConfigRaw
	I1216 10:32:01.266616  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:01.266809  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:01.267014  218168 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1216 10:32:01.267032  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:01.268377  218168 main.go:141] libmachine: Detecting operating system of created instance...
	I1216 10:32:01.268396  218168 main.go:141] libmachine: Waiting for SSH to be available...
	I1216 10:32:01.268402  218168 main.go:141] libmachine: Getting to WaitForSSH function...
	I1216 10:32:01.268410  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.270644  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.270993  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.271017  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.271158  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.271363  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.271554  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.271709  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.271883  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:01.272130  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:01.272144  218168 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1216 10:32:01.372377  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 10:32:01.372406  218168 main.go:141] libmachine: Detecting the provisioner...
	I1216 10:32:01.372416  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.375331  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.375676  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.375706  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.375911  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.376117  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.376289  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.376454  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.376597  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:01.376771  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:01.376781  218168 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1216 10:32:01.481775  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1216 10:32:01.481883  218168 main.go:141] libmachine: found compatible host: buildroot
	I1216 10:32:01.481893  218168 main.go:141] libmachine: Provisioning with buildroot...
	I1216 10:32:01.481901  218168 main.go:141] libmachine: (addons-020871) Calling .GetMachineName
	I1216 10:32:01.482209  218168 buildroot.go:166] provisioning hostname "addons-020871"
	I1216 10:32:01.482244  218168 main.go:141] libmachine: (addons-020871) Calling .GetMachineName
	I1216 10:32:01.482503  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.485099  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.485450  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.485480  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.485596  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.485775  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.485934  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.486092  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.486317  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:01.486498  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:01.486512  218168 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-020871 && echo "addons-020871" | sudo tee /etc/hostname
	I1216 10:32:01.603470  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-020871
	
	I1216 10:32:01.603509  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.606372  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.606725  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.606759  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.607007  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.607250  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.607417  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.607514  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.607654  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:01.607843  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:01.607866  218168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-020871' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-020871/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-020871' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 10:32:01.718115  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 10:32:01.718147  218168 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 10:32:01.718204  218168 buildroot.go:174] setting up certificates
	I1216 10:32:01.718225  218168 provision.go:84] configureAuth start
	I1216 10:32:01.718240  218168 main.go:141] libmachine: (addons-020871) Calling .GetMachineName
	I1216 10:32:01.718553  218168 main.go:141] libmachine: (addons-020871) Calling .GetIP
	I1216 10:32:01.721544  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.721914  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.721939  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.722123  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.724791  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.725201  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.725230  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.725366  218168 provision.go:143] copyHostCerts
	I1216 10:32:01.725448  218168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 10:32:01.725596  218168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 10:32:01.725691  218168 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
	I1216 10:32:01.725776  218168 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.addons-020871 san=[127.0.0.1 192.168.39.206 addons-020871 localhost minikube]
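	(Editor's note: the line above generates a server certificate whose SANs are 127.0.0.1, 192.168.39.206, addons-020871, localhost, and minikube, signed by the existing CA pair. The sketch below mints a comparable certificate with Go's crypto/x509; the file names, PKCS#1 key format, validity period, and error handling via panic are all assumptions, not minikube's implementation.)

```go
// Hypothetical sketch: create a server cert with the SANs from the log,
// signed by an existing CA certificate and key.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

// mustPEM reads a file and returns the DER bytes of its first PEM block.
func mustPEM(path string) []byte {
	block, _ := pem.Decode(must(os.ReadFile(path)))
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block.Bytes
}

func main() {
	caCert := must(x509.ParseCertificate(mustPEM("ca.pem")))
	caKey := must(x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem"))) // assumes an RSA PKCS#1 CA key
	key := must(rsa.GenerateKey(rand.Reader, 2048))

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-020871"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provisioning log line.
		DNSNames:    []string{"addons-020871", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.206")},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}
```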
	I1216 10:32:01.805753  218168 provision.go:177] copyRemoteCerts
	I1216 10:32:01.805820  218168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 10:32:01.805848  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.809048  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.809446  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.809477  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.809686  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.809911  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.810078  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.810246  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:01.891456  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 10:32:01.915662  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 10:32:01.940569  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 10:32:01.965262  218168 provision.go:87] duration metric: took 247.018075ms to configureAuth
	I1216 10:32:01.965300  218168 buildroot.go:189] setting minikube options for container-runtime
	I1216 10:32:01.965549  218168 config.go:182] Loaded profile config "addons-020871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:32:01.965665  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:01.968932  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.969400  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:01.969436  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:01.969683  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:01.969883  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.970048  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:01.970187  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:01.970401  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:01.970581  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:01.970595  218168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 10:32:02.190074  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 10:32:02.190101  218168 main.go:141] libmachine: Checking connection to Docker...
	I1216 10:32:02.190110  218168 main.go:141] libmachine: (addons-020871) Calling .GetURL
	I1216 10:32:02.191391  218168 main.go:141] libmachine: (addons-020871) DBG | using libvirt version 6000000
	I1216 10:32:02.193602  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.193990  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.194018  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.194454  218168 main.go:141] libmachine: Docker is up and running!
	I1216 10:32:02.194471  218168 main.go:141] libmachine: Reticulating splines...
	I1216 10:32:02.194483  218168 client.go:171] duration metric: took 24.268188098s to LocalClient.Create
	I1216 10:32:02.194517  218168 start.go:167] duration metric: took 24.268285342s to libmachine.API.Create "addons-020871"
	I1216 10:32:02.194544  218168 start.go:293] postStartSetup for "addons-020871" (driver="kvm2")
	I1216 10:32:02.194561  218168 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 10:32:02.194592  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:02.194855  218168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 10:32:02.194889  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:02.197387  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.197712  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.197750  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.197912  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:02.198175  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:02.198345  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:02.198493  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:02.279622  218168 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 10:32:02.284224  218168 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 10:32:02.284258  218168 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 10:32:02.284358  218168 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 10:32:02.284390  218168 start.go:296] duration metric: took 89.836076ms for postStartSetup
	I1216 10:32:02.284438  218168 main.go:141] libmachine: (addons-020871) Calling .GetConfigRaw
	I1216 10:32:02.285205  218168 main.go:141] libmachine: (addons-020871) Calling .GetIP
	I1216 10:32:02.287975  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.288336  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.288362  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.288632  218168 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/config.json ...
	I1216 10:32:02.288841  218168 start.go:128] duration metric: took 24.382241529s to createHost
	I1216 10:32:02.288871  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:02.291317  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.291621  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.291640  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.291821  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:02.292016  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:02.292211  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:02.292375  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:02.292540  218168 main.go:141] libmachine: Using SSH client type: native
	I1216 10:32:02.292712  218168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1216 10:32:02.292721  218168 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 10:32:02.393886  218168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734345122.370190624
	
	I1216 10:32:02.393919  218168 fix.go:216] guest clock: 1734345122.370190624
	I1216 10:32:02.393927  218168 fix.go:229] Guest: 2024-12-16 10:32:02.370190624 +0000 UTC Remote: 2024-12-16 10:32:02.288857281 +0000 UTC m=+24.498804031 (delta=81.333343ms)
	I1216 10:32:02.393987  218168 fix.go:200] guest clock delta is within tolerance: 81.333343ms
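	(Editor's note: the fix.go lines above run `date +%s.%N` in the guest and compare the result against the host's wall clock, accepting the skew only while it stays inside a tolerance. A small sketch of that comparison follows; the one-second tolerance is an assumption, and the sample value is the one from the log.)

```go
// Hypothetical sketch: parse the guest's "date +%s.%N" output and compare it
// against the host clock, as in the guest-clock check above.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	const guestOut = "1734345122.370190624" // sample value from the log
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance, not minikube's constant
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock is off by %v; would need adjusting\n", delta)
	}
}
```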
	I1216 10:32:02.393995  218168 start.go:83] releasing machines lock for "addons-020871", held for 24.487500312s
	I1216 10:32:02.394044  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:02.394391  218168 main.go:141] libmachine: (addons-020871) Calling .GetIP
	I1216 10:32:02.397223  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.397531  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.397561  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.397705  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:02.398372  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:02.398595  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:02.398713  218168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 10:32:02.398777  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:02.398909  218168 ssh_runner.go:195] Run: cat /version.json
	I1216 10:32:02.398931  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:02.401953  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.401983  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.402321  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.402344  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.402381  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:02.402404  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:02.402499  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:02.402614  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:02.402685  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:02.402758  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:02.402848  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:02.402934  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:02.403000  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:02.403051  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:02.504015  218168 ssh_runner.go:195] Run: systemctl --version
	I1216 10:32:02.510362  218168 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 10:32:02.678147  218168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 10:32:02.684462  218168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 10:32:02.684555  218168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 10:32:02.700378  218168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 10:32:02.700422  218168 start.go:495] detecting cgroup driver to use...
	I1216 10:32:02.700497  218168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 10:32:02.718240  218168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 10:32:02.732289  218168 docker.go:217] disabling cri-docker service (if available) ...
	I1216 10:32:02.732390  218168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 10:32:02.746524  218168 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 10:32:02.760402  218168 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 10:32:02.871922  218168 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 10:32:03.013544  218168 docker.go:233] disabling docker service ...
	I1216 10:32:03.013632  218168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 10:32:03.028186  218168 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 10:32:03.041790  218168 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 10:32:03.189349  218168 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 10:32:03.319128  218168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 10:32:03.336881  218168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 10:32:03.357997  218168 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 10:32:03.358068  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.368801  218168 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 10:32:03.368882  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.379501  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.390004  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.401053  218168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 10:32:03.411996  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.422967  218168 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.440604  218168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 10:32:03.451723  218168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 10:32:03.461757  218168 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 10:32:03.461826  218168 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 10:32:03.476016  218168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 10:32:03.486214  218168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:32:03.601384  218168 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 10:32:03.696632  218168 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 10:32:03.696754  218168 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 10:32:03.701692  218168 start.go:563] Will wait 60s for crictl version
	I1216 10:32:03.701777  218168 ssh_runner.go:195] Run: which crictl
	I1216 10:32:03.705740  218168 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 10:32:03.743157  218168 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 10:32:03.743299  218168 ssh_runner.go:195] Run: crio --version
	I1216 10:32:03.770395  218168 ssh_runner.go:195] Run: crio --version
	I1216 10:32:03.799779  218168 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1216 10:32:03.801195  218168 main.go:141] libmachine: (addons-020871) Calling .GetIP
	I1216 10:32:03.805053  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:03.805579  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:03.805604  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:03.805946  218168 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 10:32:03.810299  218168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 10:32:03.825080  218168 kubeadm.go:883] updating cluster {Name:addons-020871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:addons-020871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 10:32:03.825232  218168 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:32:03.825291  218168 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 10:32:03.863706  218168 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1216 10:32:03.863779  218168 ssh_runner.go:195] Run: which lz4
	I1216 10:32:03.868037  218168 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 10:32:03.872594  218168 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 10:32:03.872633  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1216 10:32:05.105305  218168 crio.go:462] duration metric: took 1.237299066s to copy over tarball
	I1216 10:32:05.105397  218168 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 10:32:07.273674  218168 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.168247404s)
	I1216 10:32:07.273704  218168 crio.go:469] duration metric: took 2.168362347s to extract the tarball
	I1216 10:32:07.273719  218168 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 10:32:07.310695  218168 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 10:32:07.351085  218168 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 10:32:07.351114  218168 cache_images.go:84] Images are preloaded, skipping loading
	I1216 10:32:07.351122  218168 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.2 crio true true} ...
	I1216 10:32:07.351250  218168 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-020871 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-020871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
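
The [Unit]/[Service]/[Install] fragment printed above becomes the kubelet drop-in that the next few lines copy to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small sketch of rendering it with text/template follows; the struct fields and local output path are illustrative assumptions, not minikube's actual types, while the flag values are taken from the log.

	package main

	import (
		"os"
		"text/template"
	)

	// kubeletDropIn mirrors the unit fragment shown in the log; only the
	// node-specific values are parameterised.
	const kubeletDropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	type nodeConfig struct {
		KubeletPath, Hostname, NodeIP string
	}

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
		// Written locally for illustration; on the guest this lands under kubelet.service.d/.
		f, err := os.Create("10-kubeadm.conf")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		if err := tmpl.Execute(f, nodeConfig{
			KubeletPath: "/var/lib/minikube/binaries/v1.31.2/kubelet",
			Hostname:    "addons-020871",
			NodeIP:      "192.168.39.206",
		}); err != nil {
			panic(err)
		}
	}
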
	I1216 10:32:07.351319  218168 ssh_runner.go:195] Run: crio config
	I1216 10:32:07.395680  218168 cni.go:84] Creating CNI manager for ""
	I1216 10:32:07.395704  218168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 10:32:07.395718  218168 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 10:32:07.395747  218168 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-020871 NodeName:addons-020871 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 10:32:07.395894  218168 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-020871"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 10:32:07.395975  218168 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 10:32:07.405401  218168 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 10:32:07.405501  218168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 10:32:07.414266  218168 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1216 10:32:07.431082  218168 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 10:32:07.448649  218168 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1216 10:32:07.466874  218168 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1216 10:32:07.470995  218168 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
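
The bash one-liner above is an idempotent /etc/hosts update: it drops any stale control-plane.minikube.internal line, appends the current mapping, and replaces the file via a temp copy. The Go sketch below does the same thing; it writes to a local test file rather than /etc/hosts, since it is only an illustration of the logic.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites path so that it contains exactly one line mapping
	// host to ip, preserving every other entry.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// hosts.test is a local stand-in for /etc/hosts.
		if err := ensureHostsEntry("hosts.test", "192.168.39.206", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}
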
	I1216 10:32:07.483471  218168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:32:07.618146  218168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 10:32:07.638427  218168 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871 for IP: 192.168.39.206
	I1216 10:32:07.638459  218168 certs.go:194] generating shared ca certs ...
	I1216 10:32:07.638478  218168 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:07.638621  218168 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 10:32:07.945451  218168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt ...
	I1216 10:32:07.945491  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt: {Name:mk1b7e8e8343576c2625ea5df4c030990d1ed65c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:07.945686  218168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key ...
	I1216 10:32:07.945698  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key: {Name:mk150bb71a4d8bf2f7e593f850c268c3c5fb2826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:07.945776  218168 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 10:32:08.109781  218168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt ...
	I1216 10:32:08.109815  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt: {Name:mk7e569a459979b9ea3d41410c35f8efe6998d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.109997  218168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key ...
	I1216 10:32:08.110009  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key: {Name:mk7ecd3ed39b16ddc6e66b6c0ea0b6c9210b002b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
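
At this point the shared CA key pairs (minikubeCA and proxyClientCA) are being generated on the build host before anything is copied into the guest. A compact sketch of generating one such self-signed CA with the standard library follows; the 2048-bit RSA key size, 10-year validity, and output filenames are assumptions for illustration, not necessarily what minikube itself uses.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true, // this CA later signs the apiserver and client certs
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		// Write ca.crt / ca.key as PEM, mirroring the WriteFile steps in the log.
		crt, err := os.Create("ca.crt")
		if err != nil {
			panic(err)
		}
		pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		crt.Close()
		keyOut, err := os.Create("ca.key")
		if err != nil {
			panic(err)
		}
		pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		keyOut.Close()
	}
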
	I1216 10:32:08.110079  218168 certs.go:256] generating profile certs ...
	I1216 10:32:08.110139  218168 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.key
	I1216 10:32:08.110153  218168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt with IP's: []
	I1216 10:32:08.285579  218168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt ...
	I1216 10:32:08.285620  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: {Name:mkd08a22d82c4cc9512ecba9ceb09ba16c728d8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.285806  218168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.key ...
	I1216 10:32:08.285818  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.key: {Name:mk7276986ea3e6e4bc9c4fe350372f9761df7065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.285886  218168 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key.50f7fc57
	I1216 10:32:08.285905  218168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt.50f7fc57 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206]
	I1216 10:32:08.541997  218168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt.50f7fc57 ...
	I1216 10:32:08.542035  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt.50f7fc57: {Name:mkd8f73129f025770e82dc30cd4115ec508353a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.542203  218168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key.50f7fc57 ...
	I1216 10:32:08.542216  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key.50f7fc57: {Name:mk370ffb9ac91a9357cf5a90ed38d9a141605ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.542297  218168 certs.go:381] copying /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt.50f7fc57 -> /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt
	I1216 10:32:08.542374  218168 certs.go:385] copying /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key.50f7fc57 -> /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key
	I1216 10:32:08.542470  218168 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.key
	I1216 10:32:08.542506  218168 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.crt with IP's: []
	I1216 10:32:08.723196  218168 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.crt ...
	I1216 10:32:08.723234  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.crt: {Name:mkacaad176092c140f7a012d05a90c00be134aac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.723407  218168 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.key ...
	I1216 10:32:08.723421  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.key: {Name:mk860f21b3c7b9d6776d96c559f45e802c46a833 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:08.723596  218168 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 10:32:08.723635  218168 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 10:32:08.723660  218168 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 10:32:08.723686  218168 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 10:32:08.724358  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 10:32:08.753250  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 10:32:08.777782  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 10:32:08.802474  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 10:32:08.827536  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 10:32:08.852092  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 10:32:08.877377  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 10:32:08.901431  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 10:32:08.926233  218168 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 10:32:08.950234  218168 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 10:32:08.966758  218168 ssh_runner.go:195] Run: openssl version
	I1216 10:32:08.972276  218168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 10:32:08.982674  218168 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:08.986974  218168 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:08.987041  218168 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 10:32:08.992647  218168 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
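
The two commands above implement the OpenSSL hashed-directory convention: `openssl x509 -hash -noout` prints the certificate's subject hash (b5213941 in this run), and the CA is then linked as /etc/ssl/certs/<hash>.0 so that TLS clients scanning /etc/ssl/certs can trust the minikube CA. A sketch of the same step from Go, with paths taken from the log; root privileges and error handling around the symlink are glossed over.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const cert = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Creating the link requires root; skipped if it already exists.
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
				panic(err)
			}
		}
		fmt.Println("CA trusted as", link)
	}
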
	I1216 10:32:09.003085  218168 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 10:32:09.007224  218168 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 10:32:09.007289  218168 kubeadm.go:392] StartCluster: {Name:addons-020871 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-020871 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:32:09.007381  218168 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 10:32:09.007434  218168 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 10:32:09.046100  218168 cri.go:89] found id: ""
	I1216 10:32:09.046179  218168 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 10:32:09.057442  218168 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 10:32:09.066845  218168 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 10:32:09.076105  218168 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 10:32:09.076128  218168 kubeadm.go:157] found existing configuration files:
	
	I1216 10:32:09.076175  218168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 10:32:09.084912  218168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 10:32:09.084982  218168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 10:32:09.094406  218168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 10:32:09.103008  218168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 10:32:09.103070  218168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 10:32:09.111943  218168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 10:32:09.120703  218168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 10:32:09.120787  218168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 10:32:09.129883  218168 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 10:32:09.138823  218168 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 10:32:09.138887  218168 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
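
The four grep/rm pairs above are one loop: for each kubeconfig that kubeadm would refuse to overwrite, keep it only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so the upcoming `kubeadm init` can regenerate it. A condensed sketch of that cleanup, run locally with sudo here rather than over SSH as in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint is absent or the file is missing,
			// in which case the stale or partial config is removed.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Println("removing", f)
				exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}
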
	I1216 10:32:09.148252  218168 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 10:32:09.197543  218168 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1216 10:32:09.197656  218168 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 10:32:09.302443  218168 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 10:32:09.302605  218168 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 10:32:09.302751  218168 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 10:32:09.310329  218168 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 10:32:09.443780  218168 out.go:235]   - Generating certificates and keys ...
	I1216 10:32:09.443932  218168 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 10:32:09.444028  218168 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 10:32:09.469694  218168 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 10:32:09.627018  218168 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 10:32:09.763666  218168 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 10:32:09.924584  218168 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 10:32:09.995439  218168 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 10:32:09.995632  218168 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-020871 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1216 10:32:10.377385  218168 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 10:32:10.377695  218168 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-020871 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1216 10:32:10.550043  218168 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 10:32:10.847457  218168 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 10:32:11.009055  218168 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 10:32:11.009235  218168 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 10:32:11.165166  218168 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 10:32:12.065836  218168 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 10:32:12.562745  218168 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 10:32:12.808599  218168 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 10:32:12.994204  218168 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 10:32:12.994920  218168 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 10:32:12.997524  218168 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 10:32:12.999294  218168 out.go:235]   - Booting up control plane ...
	I1216 10:32:12.999435  218168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 10:32:12.999557  218168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 10:32:12.999667  218168 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 10:32:13.015097  218168 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 10:32:13.021271  218168 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 10:32:13.021361  218168 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 10:32:13.149175  218168 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 10:32:13.149314  218168 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 10:32:13.650446  218168 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.879834ms
	I1216 10:32:13.650558  218168 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 10:32:18.649823  218168 kubeadm.go:310] [api-check] The API server is healthy after 5.002069951s
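
The kubelet-check and api-check lines above are simple poll loops: kubeadm repeatedly hits the kubelet's healthz endpoint on 127.0.0.1:10248 and then the API server's health endpoint until both return OK, reporting the elapsed time. A stand-alone sketch of the kubelet half of that wait; the poll interval and the use of plain HTTP GET are assumptions, while the URL and the 4-minute upper bound come from the log.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		const url = "http://127.0.0.1:10248/healthz" // kubelet health endpoint named in the log
		deadline := time.Now().Add(4 * time.Minute)  // kubeadm's stated upper bound
		start := time.Now()
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("kubelet healthy after %s\n", time.Since(start))
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("kubelet never became healthy")
	}
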
	I1216 10:32:18.661734  218168 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 10:32:18.673165  218168 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 10:32:18.705864  218168 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 10:32:18.706066  218168 kubeadm.go:310] [mark-control-plane] Marking the node addons-020871 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 10:32:18.723492  218168 kubeadm.go:310] [bootstrap-token] Using token: ziqwd1.bjhky6co4258z758
	I1216 10:32:18.724885  218168 out.go:235]   - Configuring RBAC rules ...
	I1216 10:32:18.725056  218168 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 10:32:18.733707  218168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 10:32:18.741920  218168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 10:32:18.746087  218168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 10:32:18.749427  218168 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 10:32:18.754358  218168 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 10:32:19.057987  218168 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 10:32:19.486197  218168 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 10:32:20.057446  218168 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 10:32:20.058208  218168 kubeadm.go:310] 
	I1216 10:32:20.058288  218168 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 10:32:20.058330  218168 kubeadm.go:310] 
	I1216 10:32:20.058471  218168 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 10:32:20.058483  218168 kubeadm.go:310] 
	I1216 10:32:20.058521  218168 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 10:32:20.058605  218168 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 10:32:20.058703  218168 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 10:32:20.058725  218168 kubeadm.go:310] 
	I1216 10:32:20.058809  218168 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 10:32:20.058817  218168 kubeadm.go:310] 
	I1216 10:32:20.058895  218168 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 10:32:20.058909  218168 kubeadm.go:310] 
	I1216 10:32:20.058990  218168 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 10:32:20.059096  218168 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 10:32:20.059197  218168 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 10:32:20.059207  218168 kubeadm.go:310] 
	I1216 10:32:20.059325  218168 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 10:32:20.059436  218168 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 10:32:20.059445  218168 kubeadm.go:310] 
	I1216 10:32:20.059551  218168 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ziqwd1.bjhky6co4258z758 \
	I1216 10:32:20.059717  218168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:308d0b4d152056f6ea26bd81937e5552170019dfa040d2f99e94fa77bd33e210 \
	I1216 10:32:20.059758  218168 kubeadm.go:310] 	--control-plane 
	I1216 10:32:20.059769  218168 kubeadm.go:310] 
	I1216 10:32:20.059885  218168 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 10:32:20.059895  218168 kubeadm.go:310] 
	I1216 10:32:20.060134  218168 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ziqwd1.bjhky6co4258z758 \
	I1216 10:32:20.060270  218168 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:308d0b4d152056f6ea26bd81937e5552170019dfa040d2f99e94fa77bd33e210 
	I1216 10:32:20.060912  218168 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 10:32:20.061053  218168 cni.go:84] Creating CNI manager for ""
	I1216 10:32:20.061069  218168 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 10:32:20.063059  218168 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 10:32:20.064373  218168 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 10:32:20.074492  218168 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
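
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced a few lines earlier. The sketch below writes an illustrative bridge conflist in that spirit; the exact fields minikube renders may differ, but the pod subnet matches the 10.244.0.0/16 CIDR chosen above.

	package main

	import "os"

	// An illustrative bridge CNI conflist; not necessarily byte-for-byte what minikube writes.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		// Written locally for illustration; on the guest this lands at /etc/cni/net.d/1-k8s.conflist.
		if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0644); err != nil {
			panic(err)
		}
	}
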
	I1216 10:32:20.094996  218168 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 10:32:20.095101  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:20.095136  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-020871 minikube.k8s.io/updated_at=2024_12_16T10_32_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8 minikube.k8s.io/name=addons-020871 minikube.k8s.io/primary=true
	I1216 10:32:20.118932  218168 ops.go:34] apiserver oom_adj: -16
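
The oom_adj line confirms the API server is shielded from the OOM killer: the command resolves the kube-apiserver PID with pgrep and reads /proc/<pid>/oom_adj, which comes back as -16. The same check as a tiny Go program; note that oom_adj is the legacy interface and modern kernels also expose oom_score_adj.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			panic(err) // pgrep exits non-zero when no process matches
		}
		pid := strings.Fields(string(out))[0] // first matching PID
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}
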
	I1216 10:32:20.245820  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:20.746190  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:21.246196  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:21.746565  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:22.246196  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:22.745963  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:23.246513  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:23.746809  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:24.245981  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:24.746333  218168 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 10:32:24.880496  218168 kubeadm.go:1113] duration metric: took 4.785479938s to wait for elevateKubeSystemPrivileges
	I1216 10:32:24.880551  218168 kubeadm.go:394] duration metric: took 15.873268149s to StartCluster
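
The burst of identical `kubectl get sa default` runs above is a fixed-interval wait: the default service account only appears once the controller manager's service-account controllers are up, so minikube retries roughly every 500 ms until the command succeeds and then records the elapsed time. A stand-alone sketch of that wait loop; the kubeconfig path and binary path are taken from the log, while the overall 2-minute timeout is an assumption.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		const kubeconfig = "/var/lib/minikube/kubeconfig"
		deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
		start := time.Now()
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.2/kubectl",
				"get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				fmt.Printf("default service account ready after %s\n", time.Since(start))
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}
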
	I1216 10:32:24.880578  218168 settings.go:142] acquiring lock: {Name:mk3e7a5ea729d05e36deedcd45fd7829ccafa72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:24.880735  218168 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 10:32:24.881342  218168 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 10:32:24.881628  218168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 10:32:24.881639  218168 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 10:32:24.881731  218168 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 10:32:24.881870  218168 config.go:182] Loaded profile config "addons-020871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:32:24.881882  218168 addons.go:69] Setting yakd=true in profile "addons-020871"
	I1216 10:32:24.881894  218168 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-020871"
	I1216 10:32:24.881905  218168 addons.go:234] Setting addon yakd=true in "addons-020871"
	I1216 10:32:24.881993  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.881908  218168 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-020871"
	I1216 10:32:24.882105  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.881913  218168 addons.go:69] Setting storage-provisioner=true in profile "addons-020871"
	I1216 10:32:24.882176  218168 addons.go:234] Setting addon storage-provisioner=true in "addons-020871"
	I1216 10:32:24.882211  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.881919  218168 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-020871"
	I1216 10:32:24.882250  218168 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-020871"
	I1216 10:32:24.882294  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.881919  218168 addons.go:69] Setting registry=true in profile "addons-020871"
	I1216 10:32:24.882344  218168 addons.go:234] Setting addon registry=true in "addons-020871"
	I1216 10:32:24.882389  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.882506  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.882554  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.882564  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.882593  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.882644  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.882644  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.882670  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.882815  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.882862  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.882819  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.881927  218168 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-020871"
	I1216 10:32:24.881930  218168 addons.go:69] Setting volcano=true in profile "addons-020871"
	I1216 10:32:24.883224  218168 addons.go:234] Setting addon volcano=true in "addons-020871"
	I1216 10:32:24.883289  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.881932  218168 addons.go:69] Setting ingress=true in profile "addons-020871"
	I1216 10:32:24.883339  218168 addons.go:234] Setting addon ingress=true in "addons-020871"
	I1216 10:32:24.883376  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.883668  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.883697  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.883736  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.881935  218168 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-020871"
	I1216 10:32:24.883758  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.883778  218168 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-020871"
	I1216 10:32:24.883898  218168 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-020871"
	I1216 10:32:24.881938  218168 addons.go:69] Setting ingress-dns=true in profile "addons-020871"
	I1216 10:32:24.884002  218168 addons.go:234] Setting addon ingress-dns=true in "addons-020871"
	I1216 10:32:24.881876  218168 addons.go:69] Setting cloud-spanner=true in profile "addons-020871"
	I1216 10:32:24.881941  218168 addons.go:69] Setting volumesnapshots=true in profile "addons-020871"
	I1216 10:32:24.881941  218168 addons.go:69] Setting default-storageclass=true in profile "addons-020871"
	I1216 10:32:24.881933  218168 addons.go:69] Setting gcp-auth=true in profile "addons-020871"
	I1216 10:32:24.881944  218168 addons.go:69] Setting inspektor-gadget=true in profile "addons-020871"
	I1216 10:32:24.881910  218168 addons.go:69] Setting metrics-server=true in profile "addons-020871"
	I1216 10:32:24.884110  218168 addons.go:234] Setting addon metrics-server=true in "addons-020871"
	I1216 10:32:24.884127  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.884165  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.884177  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.884198  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.884391  218168 out.go:177] * Verifying Kubernetes components...
	I1216 10:32:24.884522  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.884554  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.884578  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.884620  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.884683  218168 mustload.go:65] Loading cluster: addons-020871
	I1216 10:32:24.884700  218168 addons.go:234] Setting addon volumesnapshots=true in "addons-020871"
	I1216 10:32:24.884724  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.884683  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.884786  218168 addons.go:234] Setting addon inspektor-gadget=true in "addons-020871"
	I1216 10:32:24.885010  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.885285  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.885308  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.885391  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.885413  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.884089  218168 addons.go:234] Setting addon cloud-spanner=true in "addons-020871"
	I1216 10:32:24.885493  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.884806  218168 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-020871"
	I1216 10:32:24.890057  218168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 10:32:24.890436  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.890503  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.904708  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34377
	I1216 10:32:24.905224  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.905460  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39343
	I1216 10:32:24.905767  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.905783  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.905992  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.906194  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.906513  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.906532  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.906876  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.906916  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.908663  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46863
	I1216 10:32:24.908683  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.908900  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.917559  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.917628  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.918289  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.918332  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.918852  218168 config.go:182] Loaded profile config "addons-020871": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:32:24.918996  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44869
	I1216 10:32:24.919099  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45437
	I1216 10:32:24.919164  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I1216 10:32:24.919226  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I1216 10:32:24.919224  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.919265  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.920460  218168 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-020871"
	I1216 10:32:24.920514  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.920886  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.920923  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.922235  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.922382  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.922454  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.922511  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.922566  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.922625  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45649
	I1216 10:32:24.923385  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.923413  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.923505  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.923623  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.923641  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.923653  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.923663  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.923785  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.923790  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.923796  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.923802  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.923863  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.925013  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.925078  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.925098  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.925101  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.925081  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.925179  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.925546  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.925584  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.925609  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.925626  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.925830  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.925861  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.926354  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.926381  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.926532  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.944012  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34237
	I1216 10:32:24.944132  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45383
	I1216 10:32:24.944654  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.944657  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.945276  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.945301  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.945432  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.945454  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.945722  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.945888  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.946338  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.946395  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.946510  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.946538  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.955975  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I1216 10:32:24.956664  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.957346  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.957367  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.957792  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.957985  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.958206  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35703
	I1216 10:32:24.959254  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.960025  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.960045  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.960124  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I1216 10:32:24.960573  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37445
	I1216 10:32:24.960746  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.961148  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.961350  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.961363  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.961447  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.961521  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.961572  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.961888  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.962077  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.962110  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.962464  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.962502  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.962817  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:24.964905  218168 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 10:32:24.965924  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.966398  218168 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 10:32:24.966422  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 10:32:24.966448  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:24.966950  218168 addons.go:234] Setting addon default-storageclass=true in "addons-020871"
	I1216 10:32:24.966990  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:24.967380  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.967432  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.968757  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44815
	I1216 10:32:24.969371  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.969977  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.969995  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.970054  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.970658  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:24.970674  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.970679  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.970926  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:24.971275  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.971344  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.971660  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:24.971883  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:24.972034  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:24.972278  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.972303  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.972650  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.973169  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.973214  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.973868  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41365
	I1216 10:32:24.974379  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.974993  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.975015  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.975427  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.976028  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:24.976073  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:24.979001  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43477
	I1216 10:32:24.979616  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.980186  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.980217  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.980598  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.980771  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.982757  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:24.984458  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45287
	I1216 10:32:24.985070  218168 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 10:32:24.985166  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.985887  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.985909  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.986334  218168 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 10:32:24.986356  218168 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 10:32:24.986379  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:24.986595  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.986847  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.988329  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36515
	I1216 10:32:24.988745  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.989720  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:24.991211  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.991501  218168 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1216 10:32:24.991690  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:24.991719  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.992088  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:24.992284  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:24.992490  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:24.992682  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:24.992859  218168 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1216 10:32:24.992874  218168 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1216 10:32:24.992904  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:24.993540  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I1216 10:32:24.993982  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:24.994651  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:24.994670  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:24.995285  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:24.995680  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:24.997515  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:24.997763  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.998252  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:24.998282  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:24.998421  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:24.998592  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:24.998750  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:24.998878  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:24.999341  218168 out.go:177]   - Using image docker.io/busybox:stable
	I1216 10:32:25.000152  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43547
	I1216 10:32:25.000644  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.001349  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.001375  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.001852  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.001883  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.002248  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.002408  218168 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 10:32:25.002502  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.002721  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.003882  218168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 10:32:25.003904  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 10:32:25.003928  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.004211  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38353
	I1216 10:32:25.004820  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.005578  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.005592  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.005660  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33581
	I1216 10:32:25.006154  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.006279  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.006538  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.006822  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.006841  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.007270  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.007492  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.007549  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.007562  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.007715  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.007862  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.008054  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.008223  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.008404  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.009675  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.010214  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:25.010299  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:25.010691  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:25.011155  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:25.011198  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:25.011417  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
	I1216 10:32:25.011777  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 10:32:25.011919  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.012709  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.012729  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.012995  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 10:32:25.013008  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.013016  218168 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 10:32:25.013036  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.013182  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.013531  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.014896  218168 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1216 10:32:25.015700  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.016185  218168 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 10:32:25.016203  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 10:32:25.016226  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.016529  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.016555  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.016573  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.016614  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.017020  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.017285  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.017460  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.017963  218168 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1216 10:32:25.018090  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I1216 10:32:25.019056  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.019282  218168 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 10:32:25.019301  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1216 10:32:25.019323  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.019612  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.019843  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.019861  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.019993  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.020021  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.020122  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.020380  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.020553  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.020743  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.021533  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.021993  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.022855  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.023313  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.023332  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.023406  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.023591  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.023754  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.023917  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.025399  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.027354  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 10:32:25.028674  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 10:32:25.030046  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 10:32:25.031487  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 10:32:25.034553  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41299
	I1216 10:32:25.034816  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 10:32:25.035328  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.036074  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.036104  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.036642  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.036847  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.037849  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 10:32:25.039032  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.039266  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:25.039284  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:25.039519  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:25.039545  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:25.039552  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:25.039560  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:25.039566  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:25.040568  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46119
	I1216 10:32:25.041817  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:25.041863  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:25.041883  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	W1216 10:32:25.041998  218168 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 10:32:25.042012  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.042022  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I1216 10:32:25.042406  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 10:32:25.043038  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34423
	I1216 10:32:25.043177  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
	I1216 10:32:25.043180  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.043263  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.043309  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.043726  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.044056  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.044074  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.045129  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I1216 10:32:25.045143  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.045369  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.045591  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.045801  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.046030  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.046112  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.046127  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.046235  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.046391  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.046412  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.046646  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.046749  218168 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 10:32:25.046800  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.046851  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.048262  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 10:32:25.048282  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 10:32:25.048315  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.048406  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.048441  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.048521  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.048560  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.048575  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.048921  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.049054  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41659
	I1216 10:32:25.049132  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.050090  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.050364  218168 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1216 10:32:25.050793  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.050814  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.051205  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.051293  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.051823  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.051917  218168 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1216 10:32:25.051870  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:25.052720  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:25.053012  218168 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 10:32:25.053019  218168 out.go:177]   - Using image docker.io/registry:2.8.3
	I1216 10:32:25.053536  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.054129  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.054151  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.054365  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I1216 10:32:25.054414  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.054366  218168 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1216 10:32:25.054562  218168 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 10:32:25.054775  218168 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 10:32:25.054798  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.054665  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.054928  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.055492  218168 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 10:32:25.055508  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 10:32:25.055524  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.056043  218168 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1216 10:32:25.056058  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 10:32:25.056074  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.057159  218168 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 10:32:25.057175  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 10:32:25.057194  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.057282  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.060636  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.063911  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.063927  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.063947  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.063955  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.063979  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.063997  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.064001  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.064395  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.064678  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.065411  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.065437  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.065480  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.065792  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.065824  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.065845  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.065864  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.065881  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.065895  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.066080  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.066158  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.066252  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.066309  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.066576  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.066846  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.067532  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.067552  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.068068  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.068343  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.069188  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.071332  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.071541  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.071696  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.072607  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.074379  218168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1216 10:32:25.075670  218168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:32:25.077014  218168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:32:25.078469  218168 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 10:32:25.078528  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 10:32:25.078559  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.081670  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39273
	I1216 10:32:25.082375  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.082969  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.083010  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.083214  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.083397  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.083506  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.083614  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.083688  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:25.084493  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:25.084554  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:25.084918  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:25.085161  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:25.087113  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:25.087405  218168 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 10:32:25.087425  218168 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 10:32:25.087445  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:25.090135  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.090493  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:25.090516  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:25.090653  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:25.090828  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:25.090991  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:25.091136  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:25.333021  218168 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 10:32:25.333194  218168 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 10:32:25.467825  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 10:32:25.474263  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 10:32:25.497905  218168 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 10:32:25.497933  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1216 10:32:25.500771  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 10:32:25.513874  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 10:32:25.531484  218168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 10:32:25.531525  218168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 10:32:25.543702  218168 node_ready.go:35] waiting up to 6m0s for node "addons-020871" to be "Ready" ...
	I1216 10:32:25.547117  218168 node_ready.go:49] node "addons-020871" has status "Ready":"True"
	I1216 10:32:25.547158  218168 node_ready.go:38] duration metric: took 3.394207ms for node "addons-020871" to be "Ready" ...
	I1216 10:32:25.547173  218168 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 10:32:25.553613  218168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 10:32:25.553647  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 10:32:25.556130  218168 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-n8thf" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:25.587567  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 10:32:25.616258  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 10:32:25.616684  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 10:32:25.618001  218168 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 10:32:25.618030  218168 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 10:32:25.619368  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 10:32:25.648280  218168 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 10:32:25.648331  218168 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 10:32:25.648578  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 10:32:25.648611  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 10:32:25.691117  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 10:32:25.755514  218168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 10:32:25.755551  218168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 10:32:25.759590  218168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 10:32:25.759626  218168 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 10:32:25.795202  218168 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 10:32:25.795235  218168 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 10:32:25.890128  218168 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 10:32:25.890154  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 10:32:25.904286  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 10:32:25.904319  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 10:32:25.951937  218168 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 10:32:25.951971  218168 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 10:32:25.958349  218168 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 10:32:25.958388  218168 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 10:32:26.012457  218168 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 10:32:26.012516  218168 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 10:32:26.130613  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 10:32:26.157201  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 10:32:26.157232  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 10:32:26.225869  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 10:32:26.231315  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 10:32:26.231353  218168 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 10:32:26.253545  218168 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 10:32:26.253580  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 10:32:26.371236  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 10:32:26.371282  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 10:32:26.413295  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 10:32:26.524828  218168 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 10:32:26.524868  218168 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 10:32:26.604273  218168 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:32:26.604316  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 10:32:26.949259  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 10:32:26.949286  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 10:32:26.968163  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:32:27.200290  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 10:32:27.200333  218168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 10:32:27.459041  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 10:32:27.459065  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 10:32:27.562683  218168 pod_ready.go:103] pod "coredns-7c65d6cfc9-n8thf" in "kube-system" namespace has status "Ready":"False"
	I1216 10:32:27.696690  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 10:32:27.696723  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 10:32:27.886888  218168 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.553643166s)
	I1216 10:32:27.886942  218168 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1216 10:32:27.886945  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.419077214s)
	I1216 10:32:27.887004  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:27.887024  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:27.887457  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:27.887457  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:27.887491  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:27.887504  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:27.887511  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:27.887801  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:27.887820  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:28.152480  218168 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 10:32:28.152515  218168 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 10:32:28.409428  218168 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-020871" context rescaled to 1 replicas
	I1216 10:32:28.411554  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 10:32:29.720714  218168 pod_ready.go:103] pod "coredns-7c65d6cfc9-n8thf" in "kube-system" namespace has status "Ready":"False"
	I1216 10:32:30.135292  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.634471782s)
	I1216 10:32:30.135317  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.621405398s)
	I1216 10:32:30.135362  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.135368  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.135376  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.135381  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.135391  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.547788151s)
	I1216 10:32:30.135301  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.660990053s)
	I1216 10:32:30.135434  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.135446  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.135460  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.135477  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.135849  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.135855  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.135888  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.135899  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.135907  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.135911  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.135915  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.135891  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.135993  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136004  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136012  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.136011  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136020  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.136021  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136031  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.136039  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.136130  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136140  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136149  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.136157  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.136236  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.136265  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136272  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136351  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136361  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136418  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.136454  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.136463  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.136915  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:30.136996  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.137031  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.256774  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:30.256807  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:30.257285  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:30.257310  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:30.257314  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:31.565891  218168 pod_ready.go:93] pod "coredns-7c65d6cfc9-n8thf" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:31.565918  218168 pod_ready.go:82] duration metric: took 6.009748869s for pod "coredns-7c65d6cfc9-n8thf" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:31.565929  218168 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nqhl8" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:32.082725  218168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 10:32:32.082766  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:32.086188  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:32.086704  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:32.086737  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:32.086918  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:32.087217  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:32.087406  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:32.087616  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:32.645727  218168 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 10:32:32.760429  218168 addons.go:234] Setting addon gcp-auth=true in "addons-020871"
	I1216 10:32:32.760500  218168 host.go:66] Checking if "addons-020871" exists ...
	I1216 10:32:32.760823  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:32.760869  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:32.777269  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32861
	I1216 10:32:32.777914  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:32.778464  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:32.778486  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:32.778910  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:32.779594  218168 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:32:32.779628  218168 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:32:32.796125  218168 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I1216 10:32:32.796728  218168 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:32:32.797279  218168 main.go:141] libmachine: Using API Version  1
	I1216 10:32:32.797310  218168 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:32:32.797691  218168 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:32:32.797943  218168 main.go:141] libmachine: (addons-020871) Calling .GetState
	I1216 10:32:32.799661  218168 main.go:141] libmachine: (addons-020871) Calling .DriverName
	I1216 10:32:32.799916  218168 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 10:32:32.799945  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHHostname
	I1216 10:32:32.802800  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:32.803183  218168 main.go:141] libmachine: (addons-020871) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:5d:dc", ip: ""} in network mk-addons-020871: {Iface:virbr1 ExpiryTime:2024-12-16 11:31:52 +0000 UTC Type:0 Mac:52:54:00:2f:5d:dc Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:addons-020871 Clientid:01:52:54:00:2f:5d:dc}
	I1216 10:32:32.803218  218168 main.go:141] libmachine: (addons-020871) DBG | domain addons-020871 has defined IP address 192.168.39.206 and MAC address 52:54:00:2f:5d:dc in network mk-addons-020871
	I1216 10:32:32.803402  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHPort
	I1216 10:32:32.803616  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHKeyPath
	I1216 10:32:32.803784  218168 main.go:141] libmachine: (addons-020871) Calling .GetSSHUsername
	I1216 10:32:32.803968  218168 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/addons-020871/id_rsa Username:docker}
	I1216 10:32:33.521626  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.905317422s)
	I1216 10:32:33.521691  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.521689  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.904974983s)
	I1216 10:32:33.521705  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.521733  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.521752  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.521729  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.902328261s)
	I1216 10:32:33.521822  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.830679561s)
	I1216 10:32:33.521833  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.521843  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.521854  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.521864  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.521956  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.391289284s)
	I1216 10:32:33.521994  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.522012  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.522139  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.296231219s)
	I1216 10:32:33.522161  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.522170  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.522213  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.108875995s)
	I1216 10:32:33.522241  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.522257  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.522704  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.522710  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.554119846s)
	I1216 10:32:33.522744  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.522751  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.522758  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.522765  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	W1216 10:32:33.522835  218168 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 10:32:33.522880  218168 retry.go:31] will retry after 224.70682ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 10:32:33.522927  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.522946  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.522976  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.522983  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.522991  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.522999  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.523125  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.523172  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.523196  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.523254  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.523532  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.523548  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.523555  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.523561  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.523198  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.523229  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.523218  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.523930  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.523941  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.523948  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.523227  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.523178  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.523988  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.523998  218168 addons.go:475] Verifying addon metrics-server=true in "addons-020871"
	I1216 10:32:33.523613  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.523656  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.523678  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.524365  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.523698  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.524397  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.524376  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.524441  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.526476  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.526481  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.526497  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.526507  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.526516  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.526522  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.526537  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.526543  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.526552  218168 addons.go:475] Verifying addon ingress=true in "addons-020871"
	I1216 10:32:33.526817  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.526829  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.526837  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.526859  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.526866  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.526945  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.527260  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.527298  218168 addons.go:475] Verifying addon registry=true in "addons-020871"
	I1216 10:32:33.526950  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.527378  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.526968  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.526990  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.528627  218168 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-020871 service yakd-dashboard -n yakd-dashboard
	
	I1216 10:32:33.528650  218168 out.go:177] * Verifying ingress addon...
	I1216 10:32:33.528631  218168 out.go:177] * Verifying registry addon...
	I1216 10:32:33.530727  218168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 10:32:33.530727  218168 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 10:32:33.544670  218168 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 10:32:33.544691  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:33.544743  218168 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 10:32:33.544763  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:33.585912  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:33.585935  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:33.586180  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:33.586233  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:33.586244  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:33.592509  218168 pod_ready.go:103] pod "coredns-7c65d6cfc9-nqhl8" in "kube-system" namespace has status "Ready":"False"
	I1216 10:32:33.748611  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 10:32:34.037801  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:34.038799  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:34.558434  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:34.559036  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:34.877474  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.465862141s)
	I1216 10:32:34.877548  218168 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.077591739s)
	I1216 10:32:34.877577  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:34.877597  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:34.877939  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:34.877962  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:34.877971  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:34.877982  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:34.877991  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:34.878222  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:34.878281  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:34.878242  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:34.878297  218168 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-020871"
	I1216 10:32:34.879402  218168 out.go:177] * Verifying csi-hostpath-driver addon...
	I1216 10:32:34.879416  218168 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 10:32:34.881327  218168 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 10:32:34.882095  218168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 10:32:34.882496  218168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 10:32:34.882517  218168 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 10:32:34.924904  218168 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 10:32:34.924943  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:34.931853  218168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 10:32:34.931886  218168 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 10:32:34.988474  218168 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 10:32:34.988510  218168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 10:32:35.030051  218168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 10:32:35.035470  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:35.035671  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:35.387382  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:35.536061  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:35.536070  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:35.626606  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.877932859s)
	I1216 10:32:35.626675  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:35.626689  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:35.627042  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:35.627067  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:35.627079  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:35.627089  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:35.627088  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:35.627335  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:35.627357  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:35.887855  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:36.036141  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:36.036557  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:36.072508  218168 pod_ready.go:98] pod "coredns-7c65d6cfc9-nqhl8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:35 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.206 HostIPs:[{IP:192.168.39.206}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-12-16 10:32:24 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-16 10:32:29 +0000 UTC,FinishedAt:2024-12-16 10:32:34 +0000 UTC,ContainerID:cri-o://31fc3084d77e610eafcdd7b9e9bf9c2bf827ab3aa3bbd54deb348b94d6edc242,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://31fc3084d77e610eafcdd7b9e9bf9c2bf827ab3aa3bbd54deb348b94d6edc242 Started:0xc0029236d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0028d7b50} {Name:kube-api-access-898fn MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0028d7b60}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1216 10:32:36.072540  218168 pod_ready.go:82] duration metric: took 4.506604443s for pod "coredns-7c65d6cfc9-nqhl8" in "kube-system" namespace to be "Ready" ...
	E1216 10:32:36.072550  218168 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-nqhl8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:35 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 10:32:24 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.206 HostIPs:[{IP:192.168.39.206}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-12-16 10:32:24 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-16 10:32:29 +0000 UTC,FinishedAt:2024-12-16 10:32:34 +0000 UTC,ContainerID:cri-o://31fc3084d77e610eafcdd7b9e9bf9c2bf827ab3aa3bbd54deb348b94d6edc242,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://31fc3084d77e610eafcdd7b9e9bf9c2bf827ab3aa3bbd54deb348b94d6edc242 Started:0xc0029236d0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0028d7b50} {Name:kube-api-access-898fn MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0028d7b60}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1216 10:32:36.072569  218168 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.076670  218168 pod_ready.go:93] pod "etcd-addons-020871" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:36.076694  218168 pod_ready.go:82] duration metric: took 4.116304ms for pod "etcd-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.076706  218168 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.080478  218168 pod_ready.go:93] pod "kube-apiserver-addons-020871" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:36.080498  218168 pod_ready.go:82] duration metric: took 3.785307ms for pod "kube-apiserver-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.080506  218168 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.084591  218168 pod_ready.go:93] pod "kube-controller-manager-addons-020871" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:36.084615  218168 pod_ready.go:82] duration metric: took 4.103725ms for pod "kube-controller-manager-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.084624  218168 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n22fm" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.090762  218168 pod_ready.go:93] pod "kube-proxy-n22fm" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:36.090784  218168 pod_ready.go:82] duration metric: took 6.154015ms for pod "kube-proxy-n22fm" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.090793  218168 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.318568  218168 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.288468806s)
	I1216 10:32:36.318636  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:36.318647  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:36.318940  218168 main.go:141] libmachine: (addons-020871) DBG | Closing plugin on server side
	I1216 10:32:36.318963  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:36.318974  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:36.318988  218168 main.go:141] libmachine: Making call to close driver server
	I1216 10:32:36.319003  218168 main.go:141] libmachine: (addons-020871) Calling .Close
	I1216 10:32:36.319202  218168 main.go:141] libmachine: Successfully made call to close driver server
	I1216 10:32:36.319233  218168 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 10:32:36.321941  218168 addons.go:475] Verifying addon gcp-auth=true in "addons-020871"
	I1216 10:32:36.323473  218168 out.go:177] * Verifying gcp-auth addon...
	I1216 10:32:36.325937  218168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 10:32:36.340142  218168 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 10:32:36.340168  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:36.398866  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:36.471315  218168 pod_ready.go:93] pod "kube-scheduler-addons-020871" in "kube-system" namespace has status "Ready":"True"
	I1216 10:32:36.471359  218168 pod_ready.go:82] duration metric: took 380.549574ms for pod "kube-scheduler-addons-020871" in "kube-system" namespace to be "Ready" ...
	I1216 10:32:36.471371  218168 pod_ready.go:39] duration metric: took 10.924178884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 10:32:36.471392  218168 api_server.go:52] waiting for apiserver process to appear ...
	I1216 10:32:36.471475  218168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 10:32:36.518035  218168 api_server.go:72] duration metric: took 11.636330261s to wait for apiserver process to appear ...
	I1216 10:32:36.518070  218168 api_server.go:88] waiting for apiserver healthz status ...
	I1216 10:32:36.518091  218168 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1216 10:32:36.523687  218168 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1216 10:32:36.524703  218168 api_server.go:141] control plane version: v1.31.2
	I1216 10:32:36.524727  218168 api_server.go:131] duration metric: took 6.651151ms to wait for apiserver health ...
	I1216 10:32:36.524735  218168 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 10:32:36.546772  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:36.547018  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:36.675911  218168 system_pods.go:59] 18 kube-system pods found
	I1216 10:32:36.675948  218168 system_pods.go:61] "amd-gpu-device-plugin-5mpr5" [af2c6f6b-1b17-4d42-8958-26458e2900e4] Running
	I1216 10:32:36.675953  218168 system_pods.go:61] "coredns-7c65d6cfc9-n8thf" [dc914613-3264-4abb-8a01-5194512e0048] Running
	I1216 10:32:36.675961  218168 system_pods.go:61] "csi-hostpath-attacher-0" [ade3b0d6-039c-4252-be2d-5f4ce1376484] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 10:32:36.675967  218168 system_pods.go:61] "csi-hostpath-resizer-0" [3d3fd497-6347-4f6b-8e24-bda139222416] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 10:32:36.675976  218168 system_pods.go:61] "csi-hostpathplugin-6mvv7" [562bd994-9c64-4aaf-9ebf-9e8a574500d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 10:32:36.675980  218168 system_pods.go:61] "etcd-addons-020871" [9c41352e-5426-4d0a-beec-337bdcd099e7] Running
	I1216 10:32:36.675984  218168 system_pods.go:61] "kube-apiserver-addons-020871" [c040845d-a416-4e47-8810-49f88b739d44] Running
	I1216 10:32:36.675989  218168 system_pods.go:61] "kube-controller-manager-addons-020871" [f1e17c78-352b-498b-b5a7-e74421fb61c8] Running
	I1216 10:32:36.675994  218168 system_pods.go:61] "kube-ingress-dns-minikube" [a4893ee0-36ca-4cb9-a751-c2ffdd5daf75] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 10:32:36.675998  218168 system_pods.go:61] "kube-proxy-n22fm" [963da550-43ba-48fd-8b0e-76fc08650c48] Running
	I1216 10:32:36.676003  218168 system_pods.go:61] "kube-scheduler-addons-020871" [49126848-7cf8-4f5a-acde-98a68986ee26] Running
	I1216 10:32:36.676007  218168 system_pods.go:61] "metrics-server-84c5f94fbc-lk9mr" [fe81a6d6-63fe-417e-9b7d-9047da33acbf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 10:32:36.676014  218168 system_pods.go:61] "nvidia-device-plugin-daemonset-z7nb7" [5897e921-e086-496a-8865-2c37fd8ea3bd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 10:32:36.676019  218168 system_pods.go:61] "registry-5cc95cd69-r6zm6" [80b40373-c14b-4d26-ba1f-d0eab35d8a56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 10:32:36.676024  218168 system_pods.go:61] "registry-proxy-qk5tx" [302e1efd-762f-487a-96d5-b24b982f648f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 10:32:36.676029  218168 system_pods.go:61] "snapshot-controller-56fcc65765-7cfdv" [9413102a-ad1a-4ef5-b4fa-ab7380a28148] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 10:32:36.676036  218168 system_pods.go:61] "snapshot-controller-56fcc65765-gbbxw" [080024e6-a3d1-4134-87ff-75521de39601] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 10:32:36.676039  218168 system_pods.go:61] "storage-provisioner" [f10c5c32-c4e9-4810-91a1-603c2cff9bde] Running
	I1216 10:32:36.676047  218168 system_pods.go:74] duration metric: took 151.305953ms to wait for pod list to return data ...
	I1216 10:32:36.676057  218168 default_sa.go:34] waiting for default service account to be created ...
	I1216 10:32:36.830216  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:36.870936  218168 default_sa.go:45] found service account: "default"
	I1216 10:32:36.870964  218168 default_sa.go:55] duration metric: took 194.901031ms for default service account to be created ...
	I1216 10:32:36.870974  218168 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 10:32:36.932075  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:37.037042  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:37.037287  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:37.080533  218168 system_pods.go:86] 18 kube-system pods found
	I1216 10:32:37.080568  218168 system_pods.go:89] "amd-gpu-device-plugin-5mpr5" [af2c6f6b-1b17-4d42-8958-26458e2900e4] Running
	I1216 10:32:37.080577  218168 system_pods.go:89] "coredns-7c65d6cfc9-n8thf" [dc914613-3264-4abb-8a01-5194512e0048] Running
	I1216 10:32:37.080587  218168 system_pods.go:89] "csi-hostpath-attacher-0" [ade3b0d6-039c-4252-be2d-5f4ce1376484] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1216 10:32:37.080597  218168 system_pods.go:89] "csi-hostpath-resizer-0" [3d3fd497-6347-4f6b-8e24-bda139222416] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1216 10:32:37.080607  218168 system_pods.go:89] "csi-hostpathplugin-6mvv7" [562bd994-9c64-4aaf-9ebf-9e8a574500d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1216 10:32:37.080615  218168 system_pods.go:89] "etcd-addons-020871" [9c41352e-5426-4d0a-beec-337bdcd099e7] Running
	I1216 10:32:37.080622  218168 system_pods.go:89] "kube-apiserver-addons-020871" [c040845d-a416-4e47-8810-49f88b739d44] Running
	I1216 10:32:37.080633  218168 system_pods.go:89] "kube-controller-manager-addons-020871" [f1e17c78-352b-498b-b5a7-e74421fb61c8] Running
	I1216 10:32:37.080645  218168 system_pods.go:89] "kube-ingress-dns-minikube" [a4893ee0-36ca-4cb9-a751-c2ffdd5daf75] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 10:32:37.080654  218168 system_pods.go:89] "kube-proxy-n22fm" [963da550-43ba-48fd-8b0e-76fc08650c48] Running
	I1216 10:32:37.080661  218168 system_pods.go:89] "kube-scheduler-addons-020871" [49126848-7cf8-4f5a-acde-98a68986ee26] Running
	I1216 10:32:37.080673  218168 system_pods.go:89] "metrics-server-84c5f94fbc-lk9mr" [fe81a6d6-63fe-417e-9b7d-9047da33acbf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 10:32:37.080688  218168 system_pods.go:89] "nvidia-device-plugin-daemonset-z7nb7" [5897e921-e086-496a-8865-2c37fd8ea3bd] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 10:32:37.080701  218168 system_pods.go:89] "registry-5cc95cd69-r6zm6" [80b40373-c14b-4d26-ba1f-d0eab35d8a56] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 10:32:37.080713  218168 system_pods.go:89] "registry-proxy-qk5tx" [302e1efd-762f-487a-96d5-b24b982f648f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 10:32:37.080725  218168 system_pods.go:89] "snapshot-controller-56fcc65765-7cfdv" [9413102a-ad1a-4ef5-b4fa-ab7380a28148] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 10:32:37.080739  218168 system_pods.go:89] "snapshot-controller-56fcc65765-gbbxw" [080024e6-a3d1-4134-87ff-75521de39601] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 10:32:37.080749  218168 system_pods.go:89] "storage-provisioner" [f10c5c32-c4e9-4810-91a1-603c2cff9bde] Running
	I1216 10:32:37.080762  218168 system_pods.go:126] duration metric: took 209.781016ms to wait for k8s-apps to be running ...
	I1216 10:32:37.080775  218168 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 10:32:37.080836  218168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:32:37.099748  218168 system_svc.go:56] duration metric: took 18.965052ms WaitForService to wait for kubelet
	I1216 10:32:37.099785  218168 kubeadm.go:582] duration metric: took 12.218112212s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 10:32:37.099814  218168 node_conditions.go:102] verifying NodePressure condition ...
	I1216 10:32:37.273323  218168 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 10:32:37.273362  218168 node_conditions.go:123] node cpu capacity is 2
	I1216 10:32:37.273375  218168 node_conditions.go:105] duration metric: took 173.556511ms to run NodePressure ...
	I1216 10:32:37.273389  218168 start.go:241] waiting for startup goroutines ...
	I1216 10:32:37.330378  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:37.386789  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:37.534266  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:37.535524  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:37.829546  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:37.887194  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:38.036328  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:38.036740  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:38.329894  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:38.386582  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:38.536275  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:38.536752  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:38.830684  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:38.886779  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:39.035649  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:39.035794  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:39.329892  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:39.387017  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:39.534681  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:39.535523  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:40.113833  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:40.114657  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:40.114966  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:40.115296  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:40.330007  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:40.387607  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:40.535487  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:40.536259  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:40.829711  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:40.886686  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:41.036590  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:41.037063  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:41.330399  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:41.387679  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:41.536148  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:41.536202  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:41.829082  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:41.887031  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:42.035983  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:42.036029  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:42.332985  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:42.387755  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:42.537492  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:42.537705  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:42.830367  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:42.888620  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:43.036529  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:43.037917  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:43.329508  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:43.386548  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:43.535520  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:43.536127  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:43.830321  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:43.887616  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:44.036117  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:44.036319  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:44.329983  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:44.387852  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:44.536377  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:44.537145  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:44.829975  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:44.886919  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:45.036071  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:45.036441  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:45.330270  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:45.388217  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:45.535309  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:45.537293  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:45.829984  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:45.887804  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:46.036525  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:46.036994  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:46.329944  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:46.387474  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:46.535579  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:46.536143  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:46.829848  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:46.886419  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:47.035816  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:47.036626  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:47.329234  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:47.388097  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:47.535576  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:47.535825  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:47.900283  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:47.901206  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:48.034582  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:48.035090  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:48.329770  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:48.387163  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:48.535885  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:48.536380  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:48.829718  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:48.886283  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:49.037734  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:49.037910  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:49.330307  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:49.387924  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:49.536068  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:49.536078  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:49.830135  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:49.887186  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:50.034824  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:50.035346  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:50.616533  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:50.616682  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:50.616920  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:50.617102  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:50.829362  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:50.886855  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:51.034747  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:51.035840  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:51.330178  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:51.387056  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:51.535755  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:51.536900  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:51.830558  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:51.887087  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:52.036069  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:52.036292  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:52.598060  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:52.598089  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:52.598221  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:52.598719  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:52.830252  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:52.887635  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:53.035614  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:53.035818  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:53.330109  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:53.387643  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:53.536111  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:53.536391  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:53.832841  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:53.888169  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:54.035815  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:54.036365  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:54.331019  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:54.387651  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:54.536126  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:54.536191  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:54.830887  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:54.931692  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:55.035649  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:55.036125  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:55.330061  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:55.387329  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:55.535711  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:55.537046  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:55.829600  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:55.887797  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:56.035931  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:56.035948  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:56.330806  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:56.387500  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:56.539252  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:56.539323  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:56.830707  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:56.887242  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:57.035700  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:57.037236  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:57.331439  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:57.388564  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:57.535445  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:57.535761  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:57.829929  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:57.887486  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:58.035292  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:58.035884  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:58.331158  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:58.387314  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:58.535869  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:58.536285  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:58.830842  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:58.887743  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:59.036481  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:59.036644  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:59.329562  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:59.386442  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:32:59.536407  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:32:59.536790  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:32:59.830243  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:32:59.887809  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:00.035409  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:00.035652  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:00.330860  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:00.386865  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:00.535901  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:00.536290  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:00.830375  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:00.888033  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:01.035365  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:01.035582  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:01.329686  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:01.387135  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:01.536102  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:01.536815  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:01.829419  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:01.887746  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:02.036538  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:02.036766  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:02.329216  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:02.387399  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:02.535638  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:02.536559  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:02.829869  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:02.887587  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:03.035840  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:03.035944  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:03.330930  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:03.387120  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:03.535775  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:03.536160  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:03.830271  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:03.887726  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:04.035750  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:04.036552  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:04.330417  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:04.387978  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:04.536532  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:04.536646  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:04.834685  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:04.937614  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:05.035208  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:05.037210  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:05.330457  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:05.387967  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:05.536286  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:05.536433  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:05.829436  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:05.889379  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:06.036549  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:06.037124  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:06.331154  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:06.387565  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:06.535793  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:06.536120  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:06.830203  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:06.887031  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:07.036105  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:07.036323  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:07.330634  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:07.386810  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:07.536880  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:07.537016  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:07.829321  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:07.887527  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:08.034753  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:08.034908  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:08.330824  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:08.387880  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:08.534695  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:08.536114  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:08.830305  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:08.887577  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:09.035231  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:09.036647  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:09.329783  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:09.386853  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:09.534987  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:09.535185  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:09.830687  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:09.887460  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:10.036233  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:10.036294  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:10.329677  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:10.386384  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:10.535996  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:10.536287  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:10.830314  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:10.887743  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:11.036128  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:11.037326  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:11.329916  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:11.386927  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:11.535143  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:11.535401  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:11.830347  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:11.887864  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:12.035403  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:12.035655  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:12.329956  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:12.388783  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:12.535385  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:12.537070  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:12.830538  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:12.886721  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:13.035267  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:13.036020  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:13.329937  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:13.387652  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:13.535861  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:13.537845  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:13.831262  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:13.932883  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:14.036036  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 10:33:14.036151  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:14.330819  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:14.386770  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:14.534840  218168 kapi.go:107] duration metric: took 41.004107361s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 10:33:14.535009  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:14.830003  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:14.887869  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:15.036440  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:15.330598  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:15.386491  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:15.535959  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:15.829380  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:15.887931  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:16.036099  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:16.330536  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:16.386436  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:16.535919  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:16.830551  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:16.887325  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:17.035754  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:17.330479  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:17.386718  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:17.535020  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:17.830882  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:17.887941  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:18.035369  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:18.329957  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:18.387072  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:18.535680  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:18.830376  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:18.887954  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:19.035739  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:19.330017  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:19.387797  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:19.537518  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:19.829417  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:19.887297  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:20.035209  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:20.329853  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:20.387064  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:20.537404  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:20.830193  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:20.887331  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:21.035848  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:21.329292  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:21.386952  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:21.535188  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:21.830346  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:21.887306  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:22.035931  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:22.347897  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:22.402692  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:22.536648  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:23.066701  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:23.068565  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:23.069560  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:23.342543  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:23.445525  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:23.544494  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:23.829417  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:23.888911  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:24.036682  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:24.338526  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:24.436478  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:24.536371  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:24.830941  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:24.889040  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:25.046831  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:25.339416  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:25.388019  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:25.535557  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:25.829703  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:25.887879  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:26.038458  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:26.330328  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:26.390241  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:26.537659  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:26.830630  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:26.886266  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:27.036020  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:27.331559  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:27.386596  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:27.535713  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:27.830347  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:27.887126  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:28.035583  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:28.330890  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:28.386503  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:28.537823  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:28.830322  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:28.887734  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:29.461220  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:29.462333  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:29.465523  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:29.556294  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:29.830115  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:29.887594  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:30.035551  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:30.332343  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:30.387077  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:30.540328  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:30.829813  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:30.887370  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:31.039561  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:31.331982  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:31.389554  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:31.536446  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:31.829591  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:31.887320  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:32.035813  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:32.329906  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:32.387404  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:32.535456  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:32.831016  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:32.888230  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:33.034860  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:33.330404  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:33.386830  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:33.541133  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:33.830270  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:33.888542  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:34.036805  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:34.329696  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:34.386572  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:34.536117  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:34.830238  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:34.886905  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:35.035310  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:35.330330  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:35.388368  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:35.538994  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:35.829870  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:35.887462  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:36.035194  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:36.330070  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:36.400227  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:36.535785  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:36.830446  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:36.886451  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:37.040934  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:37.330140  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:37.387314  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 10:33:37.535631  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:37.830075  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:37.932312  218168 kapi.go:107] duration metric: took 1m3.050214546s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 10:33:38.036092  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:38.329868  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:38.536465  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:38.830454  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:39.035496  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:39.330159  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:39.536766  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:39.830697  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:40.037326  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:40.332299  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:40.535191  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:40.830799  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:41.102276  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:41.329281  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:41.535324  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:41.830067  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:42.035160  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:42.330128  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:42.535174  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:42.831898  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:43.250624  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:43.330276  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:43.535618  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:43.830087  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:44.036308  218168 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 10:33:44.338625  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:44.537149  218168 kapi.go:107] duration metric: took 1m11.006418061s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 10:33:44.831214  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:45.330333  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:45.830019  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:46.329410  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:46.831549  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:47.335062  218168 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 10:33:47.831204  218168 kapi.go:107] duration metric: took 1m11.505269435s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 10:33:47.834114  218168 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-020871 cluster.
	I1216 10:33:47.835432  218168 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 10:33:47.836733  218168 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 10:33:47.837932  218168 out.go:177] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, metrics-server, ingress-dns, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1216 10:33:47.839248  218168 addons.go:510] duration metric: took 1m22.957522658s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner nvidia-device-plugin cloud-spanner storage-provisioner-rancher metrics-server ingress-dns inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1216 10:33:47.839287  218168 start.go:246] waiting for cluster config update ...
	I1216 10:33:47.839312  218168 start.go:255] writing updated cluster config ...
	I1216 10:33:47.839625  218168 ssh_runner.go:195] Run: rm -f paused
	I1216 10:33:47.891556  218168 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1216 10:33:47.893368  218168 out.go:177] * Done! kubectl is now configured to use "addons-020871" cluster and "default" namespace by default
	
	
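	[Editor's note on the wait loop seen above] The long runs of kapi.go:96 lines are minikube polling each addon's pods by label selector roughly every half second until they stop reporting Pending, then recording a duration metric (about 41s for registry, 1m3s for csi-hostpath-driver, and 1m11s for ingress-nginx and gcp-auth in this run). Below is a minimal client-go sketch of that kind of label-selector wait loop; it is not minikube's actual kapi implementation, and the namespace, selector, poll interval, and timeout are illustrative assumptions only.

	// waitforpods.go: a hedged sketch of a label-selector wait loop in the spirit
	// of the "waiting for pod" lines above. Not minikube's code; names are assumed.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector in ns until every matching pod is
	// Running (or the timeout expires), logging progress on each poll, and returns
	// how long the wait took.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) (time.Duration, error) {
		start := time.Now()
		deadline := time.After(timeout)
		ticker := time.NewTicker(500 * time.Millisecond) // roughly the cadence visible in the log above
		defer ticker.Stop()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return 0, err
			}
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
				}
			}
			if allRunning {
				return time.Since(start), nil
			}
			log.Printf("waiting for pod %q: %d matching pod(s), not all Running yet", selector, len(pods.Items))
			select {
			case <-ctx.Done():
				return 0, ctx.Err()
			case <-deadline:
				return 0, fmt.Errorf("timed out after %s waiting for %q", timeout, selector)
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Assumes a reachable cluster via the default kubeconfig; selector and
		// namespace are examples taken from the registry addon wait above.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		took, err := waitForPods(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("took %s to wait for kubernetes.io/minikube-addons=registry\n", took)
	}

	Run against a working kubeconfig, this prints waiting lines and a final duration in the same spirit as the "duration metric: took ..." entries above. Separately, the gcp-auth messages above note that a pod can opt out of credential mounting by carrying the gcp-auth-skip-secret label in its metadata; see the minikube gcp-auth addon documentation for the exact label value it expects.
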
	==> CRI-O <==
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.052431147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345629052388917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=580605e7-8623-4103-bf8a-6e553b40794b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.057144726Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3888c3de-b735-4ceb-878d-629fdf773ead name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.057234217Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3888c3de-b735-4ceb-878d-629fdf773ead name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.057776096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ccadf6145dc3bb033c2a4cf71a0e8ebf3b703d87f95c29674f6bb06b50f123f,PodSandboxId:ceafd91da58086cb3d0905eec54768247df611af7e4edb8e81ba121d6891bb5e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1734345421402983820,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ckdlt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e874c3a-c63a-447f-b3f3-1a25182e6e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e81550007ffca59cb90023d3a841cff89d7da6ce38969537258c163a8f348f7,PodSandboxId:be62c474518c58e88907df6aa4a41b08f52bcae3de431c69898ba171788dc081,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734345282010385018,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a92e7de-8018-453e-9698-b51f8a038f3a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bb8b76bd3dade77077c6898235f291741d4426a332d2824fe0e0b886242920,PodSandboxId:ea109ac67ead1b8aad4c0bf488f0b24bc55e4b272274734831daf7b4d284df94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734345231080299961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8dbdf275-6cac-47d9-a
5f1-d03fff3bb404,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7cf5993f5adc58e7cb62f9fad07598cc030b3817a95bb2022871b613088447,PodSandboxId:8d42d70ae34d0c58a47a91721f96b845ed1d83eb39005e1c4cf52c3ea544d286,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734345184790464453,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-lk9mr,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: fe81a6d6-63fe-417e-9b7d-9047da33acbf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a88ef18bd59c12fe8d8126f9052512d6ebc875d7a9b6fbb23dfede1f07f16c7,PodSandboxId:c34856c7d42d737188108ee34b0e8af158da2ae7bfc627623f79bad622b1e413,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734345155347414546,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5mpr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2c6f6b-1b17-4d42-8958-26458e2900e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13387074acfc8da98881a56f788a74f50b35d7b15a8a577aadf6fbd2ec6f5bc,PodSandboxId:b6bf6b720488dd49e1d51dbc47f5e7e165ba61e1399039f9a36ce84aa3fe6b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734345151575789575,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f10c5c32-c4e9-4810-91a1-603c2cff9bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad21e51d9468859792eca6c6c12a6b98daa214025f3a86eb245d9983fb3f7f87,PodSandboxId:8a1ea18f44a1e99771a2a3465afc78b6a2ba06f97b77af59ebf47e8a5d2201d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734345148129253008,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8thf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc914613-3264-4abb-8a01-5194512e0048,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7ddbc1370793d8ad1ad8944388f73158a264ea36736a0363d5182157769b25,PodSandboxId:5ab34b9cea28b9b97707e833d155151a6ae2ee22867fe8be5f5239850bbe3343,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734345145743338549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n22fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 963da550-43ba-48fd-8b0e-76fc08650c48,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f038fc8e06f16345e0d02b668a881e146e6df0a49bcf18d1d780a3e10228a2b,PodSandboxId:3c727e0b159b2aee986f6ce329480f535a0e916c3e461e2c708f79ce77eb3817,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734345134279958825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeed55f4344b1bf03925a77c9c485375,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706427e1fde240f1832ff7654149b844a15d1e088423c1a75ed22d9c2674fd2a,PodSandboxId:054f89d61daae9d1ba33f7e69994ef287decf84dd7eaf00093ae0285c0c63396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734345134276277718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8538ef128b74f57b4a6873d0dd11ee1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39e44b7374d0a0030f2b9f2558e0d43d472aa4af29731b9052197cee54e71e12,PodSandboxId:bf7bed144b22198ecadae4db98844dc97f8ded9d3e7f85e4418bb64e24403118,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734345134270975940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5986cac6d125a4abfa620919bf8afc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c92f4c339793a5d45c1c3831f7ef86d9bc7581b6c98dc51bac41824ba98ea,PodSandboxId:488f268bc1ec01d9551bed7d8229a6890ac8743348d74006b57de173c97818db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1734345134223155861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac9d63d2c2dcc1b75f2fa8a1272267c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3888c3de-b735-4ceb-878d-629fdf773ead name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.099756110Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=91d31b86-b67a-4814-9346-7a6192adbf70 name=/runtime.v1.RuntimeService/Version
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.099880549Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=91d31b86-b67a-4814-9346-7a6192adbf70 name=/runtime.v1.RuntimeService/Version
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.101873536Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c4b9c222-bcb0-44fb-8b9f-b222b072adbc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.103793950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345629103757412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c4b9c222-bcb0-44fb-8b9f-b222b072adbc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.104740966Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de2f6122-c234-42a4-8a77-83fdaa02ec87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.104816138Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de2f6122-c234-42a4-8a77-83fdaa02ec87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.105072612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ccadf6145dc3bb033c2a4cf71a0e8ebf3b703d87f95c29674f6bb06b50f123f,PodSandboxId:ceafd91da58086cb3d0905eec54768247df611af7e4edb8e81ba121d6891bb5e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1734345421402983820,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ckdlt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e874c3a-c63a-447f-b3f3-1a25182e6e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e81550007ffca59cb90023d3a841cff89d7da6ce38969537258c163a8f348f7,PodSandboxId:be62c474518c58e88907df6aa4a41b08f52bcae3de431c69898ba171788dc081,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734345282010385018,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a92e7de-8018-453e-9698-b51f8a038f3a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bb8b76bd3dade77077c6898235f291741d4426a332d2824fe0e0b886242920,PodSandboxId:ea109ac67ead1b8aad4c0bf488f0b24bc55e4b272274734831daf7b4d284df94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734345231080299961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8dbdf275-6cac-47d9-a
5f1-d03fff3bb404,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7cf5993f5adc58e7cb62f9fad07598cc030b3817a95bb2022871b613088447,PodSandboxId:8d42d70ae34d0c58a47a91721f96b845ed1d83eb39005e1c4cf52c3ea544d286,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734345184790464453,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-lk9mr,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: fe81a6d6-63fe-417e-9b7d-9047da33acbf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a88ef18bd59c12fe8d8126f9052512d6ebc875d7a9b6fbb23dfede1f07f16c7,PodSandboxId:c34856c7d42d737188108ee34b0e8af158da2ae7bfc627623f79bad622b1e413,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734345155347414546,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5mpr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2c6f6b-1b17-4d42-8958-26458e2900e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13387074acfc8da98881a56f788a74f50b35d7b15a8a577aadf6fbd2ec6f5bc,PodSandboxId:b6bf6b720488dd49e1d51dbc47f5e7e165ba61e1399039f9a36ce84aa3fe6b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734345151575789575,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f10c5c32-c4e9-4810-91a1-603c2cff9bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad21e51d9468859792eca6c6c12a6b98daa214025f3a86eb245d9983fb3f7f87,PodSandboxId:8a1ea18f44a1e99771a2a3465afc78b6a2ba06f97b77af59ebf47e8a5d2201d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734345148129253008,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8thf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc914613-3264-4abb-8a01-5194512e0048,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7ddbc1370793d8ad1ad8944388f73158a264ea36736a0363d5182157769b25,PodSandboxId:5ab34b9cea28b9b97707e833d155151a6ae2ee22867fe8be5f5239850bbe3343,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734345145743338549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n22fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 963da550-43ba-48fd-8b0e-76fc08650c48,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f038fc8e06f16345e0d02b668a881e146e6df0a49bcf18d1d780a3e10228a2b,PodSandboxId:3c727e0b159b2aee986f6ce329480f535a0e916c3e461e2c708f79ce77eb3817,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734345134279958825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeed55f4344b1bf03925a77c9c485375,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706427e1fde240f1832ff7654149b844a15d1e088423c1a75ed22d9c2674fd2a,PodSandboxId:054f89d61daae9d1ba33f7e69994ef287decf84dd7eaf00093ae0285c0c63396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734345134276277718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8538ef128b74f57b4a6873d0dd11ee1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39e44b7374d0a0030f2b9f2558e0d43d472aa4af29731b9052197cee54e71e12,PodSandboxId:bf7bed144b22198ecadae4db98844dc97f8ded9d3e7f85e4418bb64e24403118,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734345134270975940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5986cac6d125a4abfa620919bf8afc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c92f4c339793a5d45c1c3831f7ef86d9bc7581b6c98dc51bac41824ba98ea,PodSandboxId:488f268bc1ec01d9551bed7d8229a6890ac8743348d74006b57de173c97818db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1734345134223155861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac9d63d2c2dcc1b75f2fa8a1272267c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de2f6122-c234-42a4-8a77-83fdaa02ec87 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.146457698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0cbb8f95-ab62-4730-9eb9-997c0a1c03b6 name=/runtime.v1.RuntimeService/Version
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.146548854Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0cbb8f95-ab62-4730-9eb9-997c0a1c03b6 name=/runtime.v1.RuntimeService/Version
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.148081981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17617428-182e-425e-b94e-b2e9027486b2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.149337636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345629149311820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17617428-182e-425e-b94e-b2e9027486b2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.150096699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d975911e-4f0a-4bec-b6e0-9653db570da6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.150177838Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d975911e-4f0a-4bec-b6e0-9653db570da6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.150521670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ccadf6145dc3bb033c2a4cf71a0e8ebf3b703d87f95c29674f6bb06b50f123f,PodSandboxId:ceafd91da58086cb3d0905eec54768247df611af7e4edb8e81ba121d6891bb5e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1734345421402983820,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ckdlt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e874c3a-c63a-447f-b3f3-1a25182e6e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e81550007ffca59cb90023d3a841cff89d7da6ce38969537258c163a8f348f7,PodSandboxId:be62c474518c58e88907df6aa4a41b08f52bcae3de431c69898ba171788dc081,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734345282010385018,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a92e7de-8018-453e-9698-b51f8a038f3a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bb8b76bd3dade77077c6898235f291741d4426a332d2824fe0e0b886242920,PodSandboxId:ea109ac67ead1b8aad4c0bf488f0b24bc55e4b272274734831daf7b4d284df94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734345231080299961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8dbdf275-6cac-47d9-a
5f1-d03fff3bb404,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7cf5993f5adc58e7cb62f9fad07598cc030b3817a95bb2022871b613088447,PodSandboxId:8d42d70ae34d0c58a47a91721f96b845ed1d83eb39005e1c4cf52c3ea544d286,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734345184790464453,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-lk9mr,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: fe81a6d6-63fe-417e-9b7d-9047da33acbf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a88ef18bd59c12fe8d8126f9052512d6ebc875d7a9b6fbb23dfede1f07f16c7,PodSandboxId:c34856c7d42d737188108ee34b0e8af158da2ae7bfc627623f79bad622b1e413,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734345155347414546,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5mpr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2c6f6b-1b17-4d42-8958-26458e2900e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13387074acfc8da98881a56f788a74f50b35d7b15a8a577aadf6fbd2ec6f5bc,PodSandboxId:b6bf6b720488dd49e1d51dbc47f5e7e165ba61e1399039f9a36ce84aa3fe6b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734345151575789575,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f10c5c32-c4e9-4810-91a1-603c2cff9bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad21e51d9468859792eca6c6c12a6b98daa214025f3a86eb245d9983fb3f7f87,PodSandboxId:8a1ea18f44a1e99771a2a3465afc78b6a2ba06f97b77af59ebf47e8a5d2201d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734345148129253008,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8thf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc914613-3264-4abb-8a01-5194512e0048,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7ddbc1370793d8ad1ad8944388f73158a264ea36736a0363d5182157769b25,PodSandboxId:5ab34b9cea28b9b97707e833d155151a6ae2ee22867fe8be5f5239850bbe3343,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734345145743338549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n22fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 963da550-43ba-48fd-8b0e-76fc08650c48,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f038fc8e06f16345e0d02b668a881e146e6df0a49bcf18d1d780a3e10228a2b,PodSandboxId:3c727e0b159b2aee986f6ce329480f535a0e916c3e461e2c708f79ce77eb3817,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734345134279958825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeed55f4344b1bf03925a77c9c485375,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706427e1fde240f1832ff7654149b844a15d1e088423c1a75ed22d9c2674fd2a,PodSandboxId:054f89d61daae9d1ba33f7e69994ef287decf84dd7eaf00093ae0285c0c63396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734345134276277718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8538ef128b74f57b4a6873d0dd11ee1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39e44b7374d0a0030f2b9f2558e0d43d472aa4af29731b9052197cee54e71e12,PodSandboxId:bf7bed144b22198ecadae4db98844dc97f8ded9d3e7f85e4418bb64e24403118,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734345134270975940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5986cac6d125a4abfa620919bf8afc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c92f4c339793a5d45c1c3831f7ef86d9bc7581b6c98dc51bac41824ba98ea,PodSandboxId:488f268bc1ec01d9551bed7d8229a6890ac8743348d74006b57de173c97818db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1734345134223155861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac9d63d2c2dcc1b75f2fa8a1272267c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d975911e-4f0a-4bec-b6e0-9653db570da6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.191138455Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e761235c-1ef1-4575-afae-484f789a5abb name=/runtime.v1.RuntimeService/Version
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.191213043Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e761235c-1ef1-4575-afae-484f789a5abb name=/runtime.v1.RuntimeService/Version
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.192937682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3724e48e-a568-4e83-aa99-eb0c9c924445 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.194347471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345629194310887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3724e48e-a568-4e83-aa99-eb0c9c924445 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.195189787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ef740d7-1b2d-48f5-ad53-0f14bd8cfad7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.195276588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ef740d7-1b2d-48f5-ad53-0f14bd8cfad7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 10:40:29 addons-020871 crio[661]: time="2024-12-16 10:40:29.195655184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3ccadf6145dc3bb033c2a4cf71a0e8ebf3b703d87f95c29674f6bb06b50f123f,PodSandboxId:ceafd91da58086cb3d0905eec54768247df611af7e4edb8e81ba121d6891bb5e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1734345421402983820,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-ckdlt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e874c3a-c63a-447f-b3f3-1a25182e6e5f,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e81550007ffca59cb90023d3a841cff89d7da6ce38969537258c163a8f348f7,PodSandboxId:be62c474518c58e88907df6aa4a41b08f52bcae3de431c69898ba171788dc081,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734345282010385018,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4a92e7de-8018-453e-9698-b51f8a038f3a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7bb8b76bd3dade77077c6898235f291741d4426a332d2824fe0e0b886242920,PodSandboxId:ea109ac67ead1b8aad4c0bf488f0b24bc55e4b272274734831daf7b4d284df94,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734345231080299961,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8dbdf275-6cac-47d9-a
5f1-d03fff3bb404,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7cf5993f5adc58e7cb62f9fad07598cc030b3817a95bb2022871b613088447,PodSandboxId:8d42d70ae34d0c58a47a91721f96b845ed1d83eb39005e1c4cf52c3ea544d286,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1734345184790464453,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-lk9mr,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: fe81a6d6-63fe-417e-9b7d-9047da33acbf,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a88ef18bd59c12fe8d8126f9052512d6ebc875d7a9b6fbb23dfede1f07f16c7,PodSandboxId:c34856c7d42d737188108ee34b0e8af158da2ae7bfc627623f79bad622b1e413,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734345155347414546,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5mpr5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af2c6f6b-1b17-4d42-8958-26458e2900e4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f13387074acfc8da98881a56f788a74f50b35d7b15a8a577aadf6fbd2ec6f5bc,PodSandboxId:b6bf6b720488dd49e1d51dbc47f5e7e165ba61e1399039f9a36ce84aa3fe6b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734345151575789575,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f10c5c32-c4e9-4810-91a1-603c2cff9bde,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad21e51d9468859792eca6c6c12a6b98daa214025f3a86eb245d9983fb3f7f87,PodSandboxId:8a1ea18f44a1e99771a2a3465afc78b6a2ba06f97b77af59ebf47e8a5d2201d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734345148129253008,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-n8thf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc914613-3264-4abb-8a01-5194512e0048,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d7ddbc1370793d8ad1ad8944388f73158a264ea36736a0363d5182157769b25,PodSandboxId:5ab34b9cea28b9b97707e833d155151a6ae2ee22867fe8be5f5239850bbe3343,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1734345145743338549,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n22fm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 963da550-43ba-48fd-8b0e-76fc08650c48,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f038fc8e06f16345e0d02b668a881e146e6df0a49bcf18d1d780a3e10228a2b,PodSandboxId:3c727e0b159b2aee986f6ce329480f535a0e916c3e461e2c708f79ce77eb3817,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26
915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1734345134279958825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aeed55f4344b1bf03925a77c9c485375,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706427e1fde240f1832ff7654149b844a15d1e088423c1a75ed22d9c2674fd2a,PodSandboxId:054f89d61daae9d1ba33f7e69994ef287decf84dd7eaf00093ae0285c0c63396,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f854
5ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1734345134276277718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8538ef128b74f57b4a6873d0dd11ee1e,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39e44b7374d0a0030f2b9f2558e0d43d472aa4af29731b9052197cee54e71e12,PodSandboxId:bf7bed144b22198ecadae4db98844dc97f8ded9d3e7f85e4418bb64e24403118,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530
fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1734345134270975940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e5986cac6d125a4abfa620919bf8afc,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:876c92f4c339793a5d45c1c3831f7ef86d9bc7581b6c98dc51bac41824ba98ea,PodSandboxId:488f268bc1ec01d9551bed7d8229a6890ac8743348d74006b57de173c97818db,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a491
73,State:CONTAINER_RUNNING,CreatedAt:1734345134223155861,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-020871,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ac9d63d2c2dcc1b75f2fa8a1272267c,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ef740d7-1b2d-48f5-ad53-0f14bd8cfad7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3ccadf6145dc3       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   ceafd91da5808       hello-world-app-55bf9c44b4-ckdlt
	9e81550007ffc       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         5 minutes ago       Running             nginx                     0                   be62c474518c5       nginx
	c7bb8b76bd3da       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   ea109ac67ead1       busybox
	3d7cf5993f5ad       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   8d42d70ae34d0       metrics-server-84c5f94fbc-lk9mr
	7a88ef18bd59c       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                7 minutes ago       Running             amd-gpu-device-plugin     0                   c34856c7d42d7       amd-gpu-device-plugin-5mpr5
	f13387074acfc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   b6bf6b720488d       storage-provisioner
	ad21e51d94688       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        8 minutes ago       Running             coredns                   0                   8a1ea18f44a1e       coredns-7c65d6cfc9-n8thf
	6d7ddbc137079       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        8 minutes ago       Running             kube-proxy                0                   5ab34b9cea28b       kube-proxy-n22fm
	2f038fc8e06f1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        8 minutes ago       Running             etcd                      0                   3c727e0b159b2       etcd-addons-020871
	706427e1fde24       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        8 minutes ago       Running             kube-controller-manager   0                   054f89d61daae       kube-controller-manager-addons-020871
	39e44b7374d0a       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        8 minutes ago       Running             kube-scheduler            0                   bf7bed144b221       kube-scheduler-addons-020871
	876c92f4c3397       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        8 minutes ago       Running             kube-apiserver            0                   488f268bc1ec0       kube-apiserver-addons-020871
	
	
	==> coredns [ad21e51d9468859792eca6c6c12a6b98daa214025f3a86eb245d9983fb3f7f87] <==
	[INFO] 10.244.0.22:54874 - 2154 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0000352s
	[INFO] 10.244.0.22:40395 - 3956 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00008337s
	[INFO] 10.244.0.22:40395 - 39759 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069233s
	[INFO] 10.244.0.22:54874 - 12136 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031875s
	[INFO] 10.244.0.22:40395 - 37518 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054567s
	[INFO] 10.244.0.22:54874 - 64675 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000027233s
	[INFO] 10.244.0.22:40395 - 21090 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061514s
	[INFO] 10.244.0.22:54874 - 27881 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028762s
	[INFO] 10.244.0.22:40395 - 9585 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000079278s
	[INFO] 10.244.0.22:54874 - 40297 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048019s
	[INFO] 10.244.0.22:54874 - 21196 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032701s
	[INFO] 10.244.0.22:33957 - 44591 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000155145s
	[INFO] 10.244.0.22:39725 - 60415 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000538103s
	[INFO] 10.244.0.22:33957 - 17787 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087503s
	[INFO] 10.244.0.22:33957 - 13791 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000097898s
	[INFO] 10.244.0.22:33957 - 24554 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000119858s
	[INFO] 10.244.0.22:33957 - 33676 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047417s
	[INFO] 10.244.0.22:39725 - 62330 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00012313s
	[INFO] 10.244.0.22:39725 - 41682 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00011625s
	[INFO] 10.244.0.22:39725 - 64245 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000099628s
	[INFO] 10.244.0.22:39725 - 47470 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053983s
	[INFO] 10.244.0.22:33957 - 26856 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000663246s
	[INFO] 10.244.0.22:39725 - 50154 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000119351s
	[INFO] 10.244.0.22:33957 - 52849 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057197s
	[INFO] 10.244.0.22:39725 - 15127 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004793s
	
	
	==> describe nodes <==
	Name:               addons-020871
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-020871
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8
	                    minikube.k8s.io/name=addons-020871
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T10_32_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-020871
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 10:32:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-020871
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 10:40:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 10:37:26 +0000   Mon, 16 Dec 2024 10:32:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 10:37:26 +0000   Mon, 16 Dec 2024 10:32:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 10:37:26 +0000   Mon, 16 Dec 2024 10:32:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 10:37:26 +0000   Mon, 16 Dec 2024 10:32:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.206
	  Hostname:    addons-020871
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 549475808f404566b3d2b6af38f9bae5
	  System UUID:                54947580-8f40-4566-b3d2-b6af38f9bae5
	  Boot ID:                    420edaf2-6fbd-459e-9928-8db34caeabe6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  default                     hello-world-app-55bf9c44b4-ckdlt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 amd-gpu-device-plugin-5mpr5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 coredns-7c65d6cfc9-n8thf                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m5s
	  kube-system                 etcd-addons-020871                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m10s
	  kube-system                 kube-apiserver-addons-020871             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-controller-manager-addons-020871    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-proxy-n22fm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-scheduler-addons-020871             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 metrics-server-84c5f94fbc-lk9mr          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m59s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m2s   kube-proxy       
	  Normal  Starting                 8m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m10s  kubelet          Node addons-020871 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m10s  kubelet          Node addons-020871 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m10s  kubelet          Node addons-020871 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m9s   kubelet          Node addons-020871 status is now: NodeReady
	  Normal  RegisteredNode           8m6s   node-controller  Node addons-020871 event: Registered Node addons-020871 in Controller
	
	
	==> dmesg <==
	[  +5.982553] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.077436] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.841575] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.152148] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.210190] kauditd_printk_skb: 117 callbacks suppressed
	[  +5.021233] kauditd_printk_skb: 152 callbacks suppressed
	[  +7.051390] kauditd_printk_skb: 67 callbacks suppressed
	[Dec16 10:33] kauditd_printk_skb: 2 callbacks suppressed
	[ +11.426656] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.833011] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.054636] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.086667] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.075112] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.130070] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.841732] kauditd_printk_skb: 7 callbacks suppressed
	[Dec16 10:34] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.153240] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.653133] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.133504] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.092868] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.358953] kauditd_printk_skb: 16 callbacks suppressed
	[ +10.377541] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 10:35] kauditd_printk_skb: 21 callbacks suppressed
	[ +31.813058] kauditd_printk_skb: 76 callbacks suppressed
	[Dec16 10:37] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [2f038fc8e06f16345e0d02b668a881e146e6df0a49bcf18d1d780a3e10228a2b] <==
	{"level":"warn","ts":"2024-12-16T10:33:29.440814Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T10:33:28.923583Z","time spent":"517.183211ms","remote":"127.0.0.1:38202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4457,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" mod_revision:703 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" value_size:4391 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" > >"}
	{"level":"warn","ts":"2024-12-16T10:33:29.442100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.761478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-12-16T10:33:29.442151Z","caller":"traceutil/trace.go:171","msg":"trace[1703560266] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1010; }","duration":"176.812985ms","start":"2024-12-16T10:33:29.265327Z","end":"2024-12-16T10:33:29.442140Z","steps":["trace[1703560266] 'agreement among raft nodes before linearized reading'  (duration: 176.697202ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:29.442357Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.475767ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:29.442394Z","caller":"traceutil/trace.go:171","msg":"trace[783107619] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1010; }","duration":"123.51582ms","start":"2024-12-16T10:33:29.318872Z","end":"2024-12-16T10:33:29.442388Z","steps":["trace[783107619] 'agreement among raft nodes before linearized reading'  (duration: 123.469098ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:29.442466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.016884ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:29.442497Z","caller":"traceutil/trace.go:171","msg":"trace[390738922] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1010; }","duration":"139.050916ms","start":"2024-12-16T10:33:29.303440Z","end":"2024-12-16T10:33:29.442491Z","steps":["trace[390738922] 'agreement among raft nodes before linearized reading'  (duration: 139.011749ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:33:41.087723Z","caller":"traceutil/trace.go:171","msg":"trace[1465388956] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"113.670692ms","start":"2024-12-16T10:33:40.974034Z","end":"2024-12-16T10:33:41.087705Z","steps":["trace[1465388956] 'process raft request'  (duration: 113.307218ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:43.237535Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.02215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:43.237681Z","caller":"traceutil/trace.go:171","msg":"trace[783137280] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1090; }","duration":"224.186702ms","start":"2024-12-16T10:33:43.013481Z","end":"2024-12-16T10:33:43.237668Z","steps":["trace[783137280] 'range keys from in-memory index tree'  (duration: 223.920529ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:33:43.237676Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.908229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T10:33:43.237804Z","caller":"traceutil/trace.go:171","msg":"trace[835467499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1090; }","duration":"215.043181ms","start":"2024-12-16T10:33:43.022752Z","end":"2024-12-16T10:33:43.237795Z","steps":["trace[835467499] 'range keys from in-memory index tree'  (duration: 214.86284ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:34:14.572715Z","caller":"traceutil/trace.go:171","msg":"trace[1147357303] transaction","detail":"{read_only:false; response_revision:1253; number_of_response:1; }","duration":"178.210654ms","start":"2024-12-16T10:34:14.394489Z","end":"2024-12-16T10:34:14.572700Z","steps":["trace[1147357303] 'process raft request'  (duration: 178.029069ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:34:24.028858Z","caller":"traceutil/trace.go:171","msg":"trace[1410742977] linearizableReadLoop","detail":"{readStateIndex:1333; appliedIndex:1332; }","duration":"253.650344ms","start":"2024-12-16T10:34:23.775193Z","end":"2024-12-16T10:34:24.028843Z","steps":["trace[1410742977] 'read index received'  (duration: 253.537438ms)","trace[1410742977] 'applied index is now lower than readState.Index'  (duration: 112.394µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T10:34:24.028949Z","caller":"traceutil/trace.go:171","msg":"trace[694862068] transaction","detail":"{read_only:false; response_revision:1294; number_of_response:1; }","duration":"436.23531ms","start":"2024-12-16T10:34:23.592708Z","end":"2024-12-16T10:34:24.028943Z","steps":["trace[694862068] 'process raft request'  (duration: 436.031154ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:34:24.029030Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T10:34:23.592689Z","time spent":"436.278761ms","remote":"127.0.0.1:38216","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1285 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2024-12-16T10:34:24.029189Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.905067ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:1 size:3395"}
	{"level":"info","ts":"2024-12-16T10:34:24.029228Z","caller":"traceutil/trace.go:171","msg":"trace[850312144] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:1; response_revision:1294; }","duration":"251.950447ms","start":"2024-12-16T10:34:23.777269Z","end":"2024-12-16T10:34:24.029219Z","steps":["trace[850312144] 'agreement among raft nodes before linearized reading'  (duration: 251.837753ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:34:24.029406Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.365448ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:883"}
	{"level":"info","ts":"2024-12-16T10:34:24.029437Z","caller":"traceutil/trace.go:171","msg":"trace[1343194805] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:1294; }","duration":"192.398792ms","start":"2024-12-16T10:34:23.837032Z","end":"2024-12-16T10:34:24.029431Z","steps":["trace[1343194805] 'agreement among raft nodes before linearized reading'  (duration: 192.316861ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:34:24.029559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"254.364555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-12-16T10:34:24.029573Z","caller":"traceutil/trace.go:171","msg":"trace[1084207642] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1294; }","duration":"254.380329ms","start":"2024-12-16T10:34:23.775188Z","end":"2024-12-16T10:34:24.029569Z","steps":["trace[1084207642] 'agreement among raft nodes before linearized reading'  (duration: 254.330675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T10:34:24.029710Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.656876ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-12-16T10:34:24.029782Z","caller":"traceutil/trace.go:171","msg":"trace[1860626495] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1294; }","duration":"237.733849ms","start":"2024-12-16T10:34:23.792040Z","end":"2024-12-16T10:34:24.029774Z","steps":["trace[1860626495] 'agreement among raft nodes before linearized reading'  (duration: 237.598653ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T10:34:36.364177Z","caller":"traceutil/trace.go:171","msg":"trace[1382503755] transaction","detail":"{read_only:false; response_revision:1408; number_of_response:1; }","duration":"227.479933ms","start":"2024-12-16T10:34:36.136669Z","end":"2024-12-16T10:34:36.364149Z","steps":["trace[1382503755] 'process raft request'  (duration: 227.174049ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:40:29 up 8 min,  0 users,  load average: 0.00, 0.45, 0.40
	Linux addons-020871 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [876c92f4c339793a5d45c1c3831f7ef86d9bc7581b6c98dc51bac41824ba98ea] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1216 10:34:16.047756       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 10:34:16.068219       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1216 10:34:17.865543       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.191.184"}
	I1216 10:34:37.699142       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 10:34:37.907516       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.126.117"}
	I1216 10:34:41.761675       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1216 10:34:42.842372       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1216 10:34:45.573760       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 10:35:03.809948       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:35:03.809986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:35:03.870589       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:35:03.870671       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:35:03.884343       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:35:03.884374       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 10:35:03.894392       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 10:35:03.896354       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1216 10:35:04.193382       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W1216 10:35:04.885375       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1216 10:35:05.140330       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1216 10:35:05.140643       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	E1216 10:35:19.570109       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1216 10:36:59.002287       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.53.75"}
	
	
	==> kube-controller-manager [706427e1fde240f1832ff7654149b844a15d1e088423c1a75ed22d9c2674fd2a] <==
	E1216 10:38:02.958766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:38:23.224819       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:38:23.224931       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:38:35.638390       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:38:35.638441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:38:37.311272       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:38:37.311307       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:38:38.538792       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:38:38.538897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:09.162917       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:09.163076       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:11.926979       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:11.927100       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:19.460090       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:19.460140       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:21.664254       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:21.664312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:54.959854       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:54.959967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:39:57.746888       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:39:57.746988       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:40:03.773469       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:40:03.773585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 10:40:16.781086       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 10:40:16.781147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [6d7ddbc1370793d8ad1ad8944388f73158a264ea36736a0363d5182157769b25] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1216 10:32:26.436733       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1216 10:32:26.453476       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.206"]
	E1216 10:32:26.453523       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 10:32:26.557943       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1216 10:32:26.558020       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 10:32:26.558045       1 server_linux.go:169] "Using iptables Proxier"
	I1216 10:32:26.562854       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 10:32:26.563130       1 server.go:483] "Version info" version="v1.31.2"
	I1216 10:32:26.563142       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 10:32:26.564759       1 config.go:199] "Starting service config controller"
	I1216 10:32:26.564769       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 10:32:26.564785       1 config.go:105] "Starting endpoint slice config controller"
	I1216 10:32:26.564789       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 10:32:26.565128       1 config.go:328] "Starting node config controller"
	I1216 10:32:26.565135       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 10:32:26.665708       1 shared_informer.go:320] Caches are synced for node config
	I1216 10:32:26.665759       1 shared_informer.go:320] Caches are synced for service config
	I1216 10:32:26.665781       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [39e44b7374d0a0030f2b9f2558e0d43d472aa4af29731b9052197cee54e71e12] <==
	W1216 10:32:17.710859       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1216 10:32:17.710997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.716425       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 10:32:17.716530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.723497       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 10:32:17.723545       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.724727       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1216 10:32:17.724816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.734485       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 10:32:17.734538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.784991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 10:32:17.785065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.800512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 10:32:17.800572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:17.850379       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 10:32:17.850713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:18.046883       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 10:32:18.046987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:18.057411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 10:32:18.057547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 10:32:18.062149       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 10:32:18.062650       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1216 10:32:18.081749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 10:32:18.082369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1216 10:32:20.882828       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 10:39:19 addons-020871 kubelet[1211]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 16 10:39:19 addons-020871 kubelet[1211]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 16 10:39:19 addons-020871 kubelet[1211]: E1216 10:39:19.609977    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345559609636619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:19 addons-020871 kubelet[1211]: E1216 10:39:19.610179    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345559609636619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:29 addons-020871 kubelet[1211]: E1216 10:39:29.612666    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345569612326290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:29 addons-020871 kubelet[1211]: E1216 10:39:29.612734    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345569612326290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:39 addons-020871 kubelet[1211]: E1216 10:39:39.615018    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345579614631900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:39 addons-020871 kubelet[1211]: E1216 10:39:39.615060    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345579614631900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:49 addons-020871 kubelet[1211]: E1216 10:39:49.618130    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345589617803985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:49 addons-020871 kubelet[1211]: E1216 10:39:49.618167    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345589617803985,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:59 addons-020871 kubelet[1211]: E1216 10:39:59.620945    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345599620300912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:39:59 addons-020871 kubelet[1211]: E1216 10:39:59.620985    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345599620300912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:09 addons-020871 kubelet[1211]: E1216 10:40:09.623937    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345609623593474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:09 addons-020871 kubelet[1211]: E1216 10:40:09.623992    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345609623593474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:10 addons-020871 kubelet[1211]: I1216 10:40:10.391065    1211 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-5mpr5" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 10:40:13 addons-020871 kubelet[1211]: I1216 10:40:13.391288    1211 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 10:40:19 addons-020871 kubelet[1211]: E1216 10:40:19.416614    1211 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 16 10:40:19 addons-020871 kubelet[1211]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 16 10:40:19 addons-020871 kubelet[1211]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 16 10:40:19 addons-020871 kubelet[1211]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 16 10:40:19 addons-020871 kubelet[1211]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 16 10:40:19 addons-020871 kubelet[1211]: E1216 10:40:19.626260    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345619625950103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:19 addons-020871 kubelet[1211]: E1216 10:40:19.626284    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345619625950103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:29 addons-020871 kubelet[1211]: E1216 10:40:29.628234    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345629627955685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 10:40:29 addons-020871 kubelet[1211]: E1216 10:40:29.628275    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734345629627955685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [f13387074acfc8da98881a56f788a74f50b35d7b15a8a577aadf6fbd2ec6f5bc] <==
	I1216 10:32:32.299444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 10:32:32.580657       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 10:32:32.580738       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 10:32:32.892192       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 10:32:32.895888       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-020871_ee801e69-63da-4df8-86b6-62118e25b60b!
	I1216 10:32:32.897592       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c21f86b-177f-4f45-83a9-a66c7cc6f27e", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-020871_ee801e69-63da-4df8-86b6-62118e25b60b became leader
	I1216 10:32:32.996508       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-020871_ee801e69-63da-4df8-86b6-62118e25b60b!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-020871 -n addons-020871
helpers_test.go:261: (dbg) Run:  kubectl --context addons-020871 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (361.43s)
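Note: the post-mortem above only shows the tail of the kubelet and storage-provisioner logs, so the metrics-server failure itself is not visible in this excerpt. A manual spot check of the addon in this profile could look like the following; these commands are illustrative (not taken from the test) and assume the addons-020871 kubectl context from this run:

	kubectl --context addons-020871 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-020871 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-020871 top pods -A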

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.030240063s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 image ls: (2.275842453s)
functional_test.go:446: expected "kicbase/echo-server:functional-365716" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (7.31s)
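Note: the failing check above boils down to loading an image tarball into the profile's container runtime and then looking for the tag in the image list. A minimal manual sketch of the same steps, assuming the tarball was produced earlier with `minikube image save` (not shown in this excerpt), using this run's profile name functional-365716 and a placeholder path /tmp/echo-server-save.tar:

	# save the tagged image to a tarball, load it back, then verify it appears in the runtime
	minikube -p functional-365716 image save kicbase/echo-server:functional-365716 /tmp/echo-server-save.tar
	minikube -p functional-365716 image load /tmp/echo-server-save.tar --alsologtostderr
	minikube -p functional-365716 image ls | grep echo-server   # the test fails because the loaded tag is missing here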

                                                
                                    
x
+
TestPreload (163.63s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-741456 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1216 11:33:31.592110  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:33:48.521908  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-741456 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.171785427s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-741456 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-741456 image pull gcr.io/k8s-minikube/busybox: (2.342525964s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-741456
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-741456: (7.29689619s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-741456 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-741456 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.959486183s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-741456 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-12-16 11:36:03.217337801 +0000 UTC m=+3885.311288063
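Note: the sequence that failed above is: start a v1.24.4 cluster with --preload=false, pull gcr.io/k8s-minikube/busybox into it, stop, start again (this time with the preload tarball applied), and expect the pulled image to survive the restart; the image list shows busybox is gone. A rough manual reproduction using the same commands as the test (the profile name test-preload-741456 comes from this run; a kvm2 host with crio support is assumed):

	minikube start -p test-preload-741456 --memory=2200 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
	minikube -p test-preload-741456 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-741456
	minikube start -p test-preload-741456 --memory=2200 --driver=kvm2 --container-runtime=crio
	minikube -p test-preload-741456 image list | grep busybox   # the failing assertion: busybox should still be listed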
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-741456 -n test-preload-741456
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-741456 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-851147 ssh -n                                                                 | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	|         | multinode-851147-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-851147 ssh -n multinode-851147 sudo cat                                       | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	|         | /home/docker/cp-test_multinode-851147-m03_multinode-851147.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-851147 cp multinode-851147-m03:/home/docker/cp-test.txt                       | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	|         | multinode-851147-m02:/home/docker/cp-test_multinode-851147-m03_multinode-851147-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-851147 ssh -n                                                                 | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	|         | multinode-851147-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-851147 ssh -n multinode-851147-m02 sudo cat                                   | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	|         | /home/docker/cp-test_multinode-851147-m03_multinode-851147-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-851147 node stop m03                                                          | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	| node    | multinode-851147 node start                                                             | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-851147                                                                | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC |                     |
	| stop    | -p multinode-851147                                                                     | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:24 UTC |
	| start   | -p multinode-851147                                                                     | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:24 UTC | 16 Dec 24 11:27 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-851147                                                                | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:27 UTC |                     |
	| node    | multinode-851147 node delete                                                            | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:27 UTC | 16 Dec 24 11:27 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-851147 stop                                                                   | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:27 UTC | 16 Dec 24 11:30 UTC |
	| start   | -p multinode-851147                                                                     | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:30 UTC | 16 Dec 24 11:32 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-851147                                                                | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:32 UTC |                     |
	| start   | -p multinode-851147-m02                                                                 | multinode-851147-m02 | jenkins | v1.34.0 | 16 Dec 24 11:32 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-851147-m03                                                                 | multinode-851147-m03 | jenkins | v1.34.0 | 16 Dec 24 11:32 UTC | 16 Dec 24 11:33 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-851147                                                                 | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:33 UTC |                     |
	| delete  | -p multinode-851147-m03                                                                 | multinode-851147-m03 | jenkins | v1.34.0 | 16 Dec 24 11:33 UTC | 16 Dec 24 11:33 UTC |
	| delete  | -p multinode-851147                                                                     | multinode-851147     | jenkins | v1.34.0 | 16 Dec 24 11:33 UTC | 16 Dec 24 11:33 UTC |
	| start   | -p test-preload-741456                                                                  | test-preload-741456  | jenkins | v1.34.0 | 16 Dec 24 11:33 UTC | 16 Dec 24 11:34 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-741456 image pull                                                          | test-preload-741456  | jenkins | v1.34.0 | 16 Dec 24 11:34 UTC | 16 Dec 24 11:34 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-741456                                                                  | test-preload-741456  | jenkins | v1.34.0 | 16 Dec 24 11:34 UTC | 16 Dec 24 11:35 UTC |
	| start   | -p test-preload-741456                                                                  | test-preload-741456  | jenkins | v1.34.0 | 16 Dec 24 11:35 UTC | 16 Dec 24 11:36 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-741456 image list                                                          | test-preload-741456  | jenkins | v1.34.0 | 16 Dec 24 11:36 UTC | 16 Dec 24 11:36 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:35:02
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:35:02.083966  250756 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:35:02.084097  250756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:35:02.084106  250756 out.go:358] Setting ErrFile to fd 2...
	I1216 11:35:02.084113  250756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:35:02.084329  250756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:35:02.084926  250756 out.go:352] Setting JSON to false
	I1216 11:35:02.085893  250756 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11849,"bootTime":1734337053,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 11:35:02.085967  250756 start.go:139] virtualization: kvm guest
	I1216 11:35:02.088088  250756 out.go:177] * [test-preload-741456] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 11:35:02.089548  250756 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:35:02.089617  250756 notify.go:220] Checking for updates...
	I1216 11:35:02.092394  250756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:35:02.093508  250756 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:35:02.094678  250756 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:35:02.095801  250756 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 11:35:02.096934  250756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:35:02.098452  250756 config.go:182] Loaded profile config "test-preload-741456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1216 11:35:02.098835  250756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:35:02.098903  250756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:35:02.113995  250756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37803
	I1216 11:35:02.114506  250756 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:35:02.115065  250756 main.go:141] libmachine: Using API Version  1
	I1216 11:35:02.115088  250756 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:35:02.115399  250756 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:35:02.115582  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:02.117106  250756 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1216 11:35:02.118116  250756 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:35:02.118450  250756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:35:02.118495  250756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:35:02.133245  250756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41281
	I1216 11:35:02.133740  250756 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:35:02.134204  250756 main.go:141] libmachine: Using API Version  1
	I1216 11:35:02.134224  250756 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:35:02.134512  250756 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:35:02.134727  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:02.170347  250756 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 11:35:02.171556  250756 start.go:297] selected driver: kvm2
	I1216 11:35:02.171576  250756 start.go:901] validating driver "kvm2" against &{Name:test-preload-741456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-741456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:35:02.171745  250756 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:35:02.173088  250756 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:35:02.173230  250756 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 11:35:02.188345  250756 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 11:35:02.188712  250756 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:35:02.188742  250756 cni.go:84] Creating CNI manager for ""
	I1216 11:35:02.188772  250756 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:35:02.188822  250756 start.go:340] cluster config:
	{Name:test-preload-741456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-741456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:35:02.188922  250756 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:35:02.190662  250756 out.go:177] * Starting "test-preload-741456" primary control-plane node in "test-preload-741456" cluster
	I1216 11:35:02.191809  250756 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1216 11:35:02.213308  250756 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1216 11:35:02.213341  250756 cache.go:56] Caching tarball of preloaded images
	I1216 11:35:02.213498  250756 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1216 11:35:02.215170  250756 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1216 11:35:02.216443  250756 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1216 11:35:02.241911  250756 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1216 11:35:07.157050  250756 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1216 11:35:07.157152  250756 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1216 11:35:08.012709  250756 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1216 11:35:08.012851  250756 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/config.json ...
	I1216 11:35:08.013127  250756 start.go:360] acquireMachinesLock for test-preload-741456: {Name:mk416695f38f04563a2ca155c5383ee4d8f0f97b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:35:08.013199  250756 start.go:364] duration metric: took 45.222µs to acquireMachinesLock for "test-preload-741456"
	I1216 11:35:08.013215  250756 start.go:96] Skipping create...Using existing machine configuration
	I1216 11:35:08.013220  250756 fix.go:54] fixHost starting: 
	I1216 11:35:08.013492  250756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:35:08.013528  250756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:35:08.028252  250756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I1216 11:35:08.028794  250756 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:35:08.029364  250756 main.go:141] libmachine: Using API Version  1
	I1216 11:35:08.029390  250756 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:35:08.029729  250756 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:35:08.029927  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:08.030070  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetState
	I1216 11:35:08.031839  250756 fix.go:112] recreateIfNeeded on test-preload-741456: state=Stopped err=<nil>
	I1216 11:35:08.031861  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	W1216 11:35:08.032005  250756 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 11:35:08.034024  250756 out.go:177] * Restarting existing kvm2 VM for "test-preload-741456" ...
	I1216 11:35:08.035181  250756 main.go:141] libmachine: (test-preload-741456) Calling .Start
	I1216 11:35:08.035335  250756 main.go:141] libmachine: (test-preload-741456) starting domain...
	I1216 11:35:08.035359  250756 main.go:141] libmachine: (test-preload-741456) ensuring networks are active...
	I1216 11:35:08.036062  250756 main.go:141] libmachine: (test-preload-741456) Ensuring network default is active
	I1216 11:35:08.036374  250756 main.go:141] libmachine: (test-preload-741456) Ensuring network mk-test-preload-741456 is active
	I1216 11:35:08.036752  250756 main.go:141] libmachine: (test-preload-741456) getting domain XML...
	I1216 11:35:08.037459  250756 main.go:141] libmachine: (test-preload-741456) creating domain...
	I1216 11:35:09.232282  250756 main.go:141] libmachine: (test-preload-741456) waiting for IP...
	I1216 11:35:09.233184  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:09.233586  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:09.233701  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:09.233598  250824 retry.go:31] will retry after 233.599185ms: waiting for domain to come up
	I1216 11:35:09.469136  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:09.469679  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:09.469713  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:09.469630  250824 retry.go:31] will retry after 266.800199ms: waiting for domain to come up
	I1216 11:35:09.738304  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:09.738756  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:09.738780  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:09.738722  250824 retry.go:31] will retry after 411.066678ms: waiting for domain to come up
	I1216 11:35:10.151255  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:10.151744  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:10.151772  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:10.151724  250824 retry.go:31] will retry after 485.212564ms: waiting for domain to come up
	I1216 11:35:10.638311  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:10.638766  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:10.638788  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:10.638749  250824 retry.go:31] will retry after 490.009973ms: waiting for domain to come up
	I1216 11:35:11.130496  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:11.130799  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:11.130825  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:11.130761  250824 retry.go:31] will retry after 742.98373ms: waiting for domain to come up
	I1216 11:35:11.875871  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:11.876235  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:11.876254  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:11.876212  250824 retry.go:31] will retry after 769.033815ms: waiting for domain to come up
	I1216 11:35:12.647435  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:12.647902  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:12.647926  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:12.647872  250824 retry.go:31] will retry after 1.392828857s: waiting for domain to come up
	I1216 11:35:14.041928  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:14.042344  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:14.042372  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:14.042333  250824 retry.go:31] will retry after 1.580160762s: waiting for domain to come up
	I1216 11:35:15.625194  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:15.625650  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:15.625673  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:15.625587  250824 retry.go:31] will retry after 2.153306889s: waiting for domain to come up
	I1216 11:35:17.782074  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:17.782583  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:17.782613  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:17.782537  250824 retry.go:31] will retry after 2.420375s: waiting for domain to come up
	I1216 11:35:20.204745  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:20.205186  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:20.205210  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:20.205152  250824 retry.go:31] will retry after 2.18849586s: waiting for domain to come up
	I1216 11:35:22.395247  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:22.395706  250756 main.go:141] libmachine: (test-preload-741456) DBG | unable to find current IP address of domain test-preload-741456 in network mk-test-preload-741456
	I1216 11:35:22.395733  250756 main.go:141] libmachine: (test-preload-741456) DBG | I1216 11:35:22.395654  250824 retry.go:31] will retry after 2.811470459s: waiting for domain to come up
	I1216 11:35:25.210165  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.210642  250756 main.go:141] libmachine: (test-preload-741456) found domain IP: 192.168.39.107
	I1216 11:35:25.210662  250756 main.go:141] libmachine: (test-preload-741456) reserving static IP address...
	I1216 11:35:25.210681  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has current primary IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.211097  250756 main.go:141] libmachine: (test-preload-741456) reserved static IP address 192.168.39.107 for domain test-preload-741456
	I1216 11:35:25.211118  250756 main.go:141] libmachine: (test-preload-741456) waiting for SSH...
	I1216 11:35:25.211142  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "test-preload-741456", mac: "52:54:00:e1:a7:c5", ip: "192.168.39.107"} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:25.211168  250756 main.go:141] libmachine: (test-preload-741456) DBG | skip adding static IP to network mk-test-preload-741456 - found existing host DHCP lease matching {name: "test-preload-741456", mac: "52:54:00:e1:a7:c5", ip: "192.168.39.107"}
	I1216 11:35:25.211189  250756 main.go:141] libmachine: (test-preload-741456) DBG | Getting to WaitForSSH function...
	I1216 11:35:25.213657  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.213958  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:25.214002  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.214084  250756 main.go:141] libmachine: (test-preload-741456) DBG | Using SSH client type: external
	I1216 11:35:25.214110  250756 main.go:141] libmachine: (test-preload-741456) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/test-preload-741456/id_rsa (-rw-------)
	I1216 11:35:25.214142  250756 main.go:141] libmachine: (test-preload-741456) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/test-preload-741456/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 11:35:25.214152  250756 main.go:141] libmachine: (test-preload-741456) DBG | About to run SSH command:
	I1216 11:35:25.214161  250756 main.go:141] libmachine: (test-preload-741456) DBG | exit 0
	I1216 11:35:25.337340  250756 main.go:141] libmachine: (test-preload-741456) DBG | SSH cmd err, output: <nil>: 
	I1216 11:35:25.337755  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetConfigRaw
	I1216 11:35:25.338409  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetIP
	I1216 11:35:25.340997  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.341484  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:25.341513  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.341786  250756 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/config.json ...
	I1216 11:35:25.342051  250756 machine.go:93] provisionDockerMachine start ...
	I1216 11:35:25.342081  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:25.342321  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:25.345220  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.345615  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:25.345642  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.345883  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:25.346088  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:25.346272  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:25.346439  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:25.346607  250756 main.go:141] libmachine: Using SSH client type: native
	I1216 11:35:25.346854  250756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1216 11:35:25.346869  250756 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 11:35:25.449311  250756 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 11:35:25.449340  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetMachineName
	I1216 11:35:25.449635  250756 buildroot.go:166] provisioning hostname "test-preload-741456"
	I1216 11:35:25.449663  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetMachineName
	I1216 11:35:25.449923  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:25.452313  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.452704  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:25.452737  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.452850  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:25.453041  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:25.453186  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:25.453315  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:25.453449  250756 main.go:141] libmachine: Using SSH client type: native
	I1216 11:35:25.453654  250756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1216 11:35:25.453670  250756 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-741456 && echo "test-preload-741456" | sudo tee /etc/hostname
	I1216 11:35:25.578155  250756 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-741456
	
	I1216 11:35:25.578188  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:25.580930  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.581262  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:25.581300  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.581454  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:25.581662  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:25.581866  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:25.582043  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:25.582228  250756 main.go:141] libmachine: Using SSH client type: native
	I1216 11:35:25.582432  250756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1216 11:35:25.582459  250756 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-741456' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-741456/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-741456' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:35:25.693493  250756 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:35:25.693537  250756 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 11:35:25.693566  250756 buildroot.go:174] setting up certificates
	I1216 11:35:25.693581  250756 provision.go:84] configureAuth start
	I1216 11:35:25.693597  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetMachineName
	I1216 11:35:25.693906  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetIP
	I1216 11:35:25.696406  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.696795  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:25.696836  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.696976  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:25.699079  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.699396  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:25.699432  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.699620  250756 provision.go:143] copyHostCerts
	I1216 11:35:25.699698  250756 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem, removing ...
	I1216 11:35:25.699714  250756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem
	I1216 11:35:25.699779  250756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 11:35:25.699877  250756 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem, removing ...
	I1216 11:35:25.699885  250756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem
	I1216 11:35:25.699911  250756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 11:35:25.699966  250756 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem, removing ...
	I1216 11:35:25.699973  250756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem
	I1216 11:35:25.699993  250756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
	I1216 11:35:25.700042  250756 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.test-preload-741456 san=[127.0.0.1 192.168.39.107 localhost minikube test-preload-741456]
	I1216 11:35:25.828814  250756 provision.go:177] copyRemoteCerts
	I1216 11:35:25.828874  250756 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:35:25.828903  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:25.831820  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.832132  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:25.832160  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:25.832367  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:25.832582  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:25.832744  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:25.832940  250756 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/test-preload-741456/id_rsa Username:docker}
	I1216 11:35:25.915241  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:35:25.941702  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 11:35:25.968107  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 11:35:26.005009  250756 provision.go:87] duration metric: took 311.414136ms to configureAuth
	I1216 11:35:26.005051  250756 buildroot.go:189] setting minikube options for container-runtime
	I1216 11:35:26.005238  250756 config.go:182] Loaded profile config "test-preload-741456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1216 11:35:26.005329  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:26.007929  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.008290  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:26.008322  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.008461  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:26.008666  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:26.008848  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:26.008999  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:26.009176  250756 main.go:141] libmachine: Using SSH client type: native
	I1216 11:35:26.009345  250756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1216 11:35:26.009359  250756 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:35:26.217773  250756 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:35:26.217806  250756 machine.go:96] duration metric: took 875.734527ms to provisionDockerMachine
	I1216 11:35:26.217822  250756 start.go:293] postStartSetup for "test-preload-741456" (driver="kvm2")
	I1216 11:35:26.217839  250756 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:35:26.217866  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:26.218202  250756 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:35:26.218243  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:26.220715  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.221105  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:26.221144  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.221302  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:26.221503  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:26.221680  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:26.221814  250756 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/test-preload-741456/id_rsa Username:docker}
	I1216 11:35:26.303588  250756 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:35:26.307467  250756 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 11:35:26.307496  250756 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 11:35:26.307584  250756 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 11:35:26.307683  250756 filesync.go:149] local asset: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem -> 2175192.pem in /etc/ssl/certs
	I1216 11:35:26.307801  250756 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 11:35:26.317033  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:35:26.339659  250756 start.go:296] duration metric: took 121.817875ms for postStartSetup
	I1216 11:35:26.339706  250756 fix.go:56] duration metric: took 18.32648432s for fixHost
	I1216 11:35:26.339735  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:26.342484  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.342830  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:26.342859  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.342989  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:26.343223  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:26.343387  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:26.343551  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:26.343757  250756 main.go:141] libmachine: Using SSH client type: native
	I1216 11:35:26.343939  250756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I1216 11:35:26.343952  250756 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 11:35:26.445811  250756 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734348926.420274751
	
	I1216 11:35:26.445840  250756 fix.go:216] guest clock: 1734348926.420274751
	I1216 11:35:26.445852  250756 fix.go:229] Guest: 2024-12-16 11:35:26.420274751 +0000 UTC Remote: 2024-12-16 11:35:26.339711933 +0000 UTC m=+24.295082742 (delta=80.562818ms)
	I1216 11:35:26.445882  250756 fix.go:200] guest clock delta is within tolerance: 80.562818ms
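The fix step runs `date +%s.%N` on the guest and compares the result against the host clock (delta=80.562818ms above). A hypothetical Go helper, not minikube's fix.go, that parses that output and applies an assumed one-second tolerance:

// Hypothetical sketch: parse `date +%s.%N` output from the guest and
// report the delta against the local clock, as in the log above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano turns "1734348926.420274751" into a time.Time.
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixNano("1734348926.420274751") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = time.Second // assumed tolerance, for illustration only
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}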
	I1216 11:35:26.445891  250756 start.go:83] releasing machines lock for "test-preload-741456", held for 18.432679453s
	I1216 11:35:26.445928  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:26.446179  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetIP
	I1216 11:35:26.449196  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.449621  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:26.449644  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.449807  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:26.450293  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:26.450497  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:26.450591  250756 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:35:26.450634  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:26.450709  250756 ssh_runner.go:195] Run: cat /version.json
	I1216 11:35:26.450730  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:26.453464  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.453673  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.453922  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:26.453953  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.454063  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:26.454102  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:26.454103  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:26.454285  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:26.454302  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:26.454468  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:26.454646  250756 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/test-preload-741456/id_rsa Username:docker}
	I1216 11:35:26.454727  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:26.454891  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:26.455058  250756 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/test-preload-741456/id_rsa Username:docker}
	I1216 11:35:26.559069  250756 ssh_runner.go:195] Run: systemctl --version
	I1216 11:35:26.564791  250756 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:35:26.705544  250756 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 11:35:26.711031  250756 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 11:35:26.711114  250756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:35:26.726844  250756 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 11:35:26.726876  250756 start.go:495] detecting cgroup driver to use...
	I1216 11:35:26.726966  250756 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:35:26.743222  250756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:35:26.756936  250756 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:35:26.757014  250756 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:35:26.770250  250756 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:35:26.783533  250756 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:35:26.899145  250756 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:35:27.043533  250756 docker.go:233] disabling docker service ...
	I1216 11:35:27.043621  250756 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:35:27.057688  250756 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:35:27.070507  250756 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:35:27.214482  250756 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:35:27.315542  250756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:35:27.329161  250756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:35:27.346012  250756 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1216 11:35:27.346090  250756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:35:27.355580  250756 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:35:27.355656  250756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:35:27.365423  250756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:35:27.374784  250756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:35:27.384162  250756 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:35:27.393859  250756 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:35:27.403855  250756 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:35:27.419852  250756 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
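The CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf is adjusted above with sed: the pause image is pinned to registry.k8s.io/pause:3.7 and the cgroup manager switched to cgroupfs. A rough Go equivalent of those two in-place rewrites (illustrative only; the real flow shells out to sed over SSH):

// Hypothetical sketch of the sed-based CRI-O edits above: rewrite the
// pause_image and cgroup_manager keys in a crio.conf-style drop-in using
// regexp replacement instead of sed.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf) // in the real flow this would be written back to 02-crio.conf
}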
	I1216 11:35:27.429291  250756 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:35:27.437791  250756 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 11:35:27.437844  250756 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 11:35:27.449309  250756 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
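When the net.bridge.bridge-nf-call-iptables sysctl is missing, the setup falls back to loading br_netfilter and then enables IPv4 forwarding, as logged above. A hypothetical sketch of that probe-then-fallback flow using os/exec (not minikube's actual code; error handling trimmed):

// Check the bridge netfilter sysctl and, if it is absent, load br_netfilter,
// then enable IPv4 forwarding, mirroring the commands in the log.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// Sysctl missing: the module is probably not loaded yet.
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}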
	I1216 11:35:27.458092  250756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:35:27.563520  250756 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 11:35:27.648069  250756 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:35:27.648155  250756 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:35:27.653037  250756 start.go:563] Will wait 60s for crictl version
	I1216 11:35:27.653098  250756 ssh_runner.go:195] Run: which crictl
	I1216 11:35:27.656540  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:35:27.691587  250756 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 11:35:27.691700  250756 ssh_runner.go:195] Run: crio --version
	I1216 11:35:27.718623  250756 ssh_runner.go:195] Run: crio --version
	I1216 11:35:27.745774  250756 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1216 11:35:27.747081  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetIP
	I1216 11:35:27.750014  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:27.750423  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:27.750473  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:27.750714  250756 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 11:35:27.754695  250756 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:35:27.766669  250756 kubeadm.go:883] updating cluster {Name:test-preload-741456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-741456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:35:27.766778  250756 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1216 11:35:27.766821  250756 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:35:27.810373  250756 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1216 11:35:27.810460  250756 ssh_runner.go:195] Run: which lz4
	I1216 11:35:27.814310  250756 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 11:35:27.818153  250756 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 11:35:27.818189  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1216 11:35:29.240023  250756 crio.go:462] duration metric: took 1.425748429s to copy over tarball
	I1216 11:35:29.240108  250756 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 11:35:31.556343  250756 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.316194968s)
	I1216 11:35:31.556397  250756 crio.go:469] duration metric: took 2.316340605s to extract the tarball
	I1216 11:35:31.556409  250756 ssh_runner.go:146] rm: /preloaded.tar.lz4
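Because no preloaded images were found, the preload tarball is stat'ed on the guest, copied over when absent, extracted with tar -I lz4 -C /var, and then removed. A minimal sketch of the stat-then-transfer decision, with runSSH as a stand-in for minikube's ssh_runner (an assumption, not a real API):

// Hypothetical sketch of the existence check above: only transfer the
// preload tarball if `stat` fails. runSSH is a placeholder that simply
// runs the command locally.
package main

import (
	"fmt"
	"os/exec"
)

func runSSH(args ...string) error {
	return exec.Command(args[0], args[1:]...).Run()
}

func main() {
	const tarball = "/preloaded.tar.lz4"
	if err := runSSH("stat", "-c", "%s %y", tarball); err != nil {
		fmt.Println("tarball missing on guest, would scp it over:", err)
		// scp step would go here, followed by:
		// tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		return
	}
	fmt.Println("tarball already present, skipping transfer")
}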
	I1216 11:35:31.596823  250756 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:35:31.635691  250756 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1216 11:35:31.635720  250756 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 11:35:31.635797  250756 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:35:31.635817  250756 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1216 11:35:31.635839  250756 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1216 11:35:31.635866  250756 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1216 11:35:31.635903  250756 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 11:35:31.635923  250756 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1216 11:35:31.635872  250756 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 11:35:31.635813  250756 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1216 11:35:31.637478  250756 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 11:35:31.637491  250756 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1216 11:35:31.637506  250756 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1216 11:35:31.637512  250756 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1216 11:35:31.637519  250756 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:35:31.637478  250756 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1216 11:35:31.637538  250756 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 11:35:31.637545  250756 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1216 11:35:31.787626  250756 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1216 11:35:31.788637  250756 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1216 11:35:31.788917  250756 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1216 11:35:31.794681  250756 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1216 11:35:31.801490  250756 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1216 11:35:31.820456  250756 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1216 11:35:31.873392  250756 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1216 11:35:31.873435  250756 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1216 11:35:31.873475  250756 ssh_runner.go:195] Run: which crictl
	I1216 11:35:31.915951  250756 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1216 11:35:31.916003  250756 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1216 11:35:31.916014  250756 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1216 11:35:31.916047  250756 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1216 11:35:31.916055  250756 ssh_runner.go:195] Run: which crictl
	I1216 11:35:31.916091  250756 ssh_runner.go:195] Run: which crictl
	I1216 11:35:31.916096  250756 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1216 11:35:31.916124  250756 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1216 11:35:31.916162  250756 ssh_runner.go:195] Run: which crictl
	I1216 11:35:31.917510  250756 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 11:35:31.947826  250756 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1216 11:35:31.947868  250756 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1216 11:35:31.947912  250756 ssh_runner.go:195] Run: which crictl
	I1216 11:35:31.947907  250756 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1216 11:35:31.947945  250756 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1216 11:35:31.947953  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1216 11:35:31.947977  250756 ssh_runner.go:195] Run: which crictl
	I1216 11:35:31.948097  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1216 11:35:31.948112  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1216 11:35:31.948132  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 11:35:32.044320  250756 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1216 11:35:32.044369  250756 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 11:35:32.044426  250756 ssh_runner.go:195] Run: which crictl
	I1216 11:35:32.044444  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1216 11:35:32.065634  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1216 11:35:32.065674  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1216 11:35:32.065790  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1216 11:35:32.065814  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1216 11:35:32.065877  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 11:35:32.171214  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1216 11:35:32.171214  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 11:35:32.191089  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1216 11:35:32.191153  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1216 11:35:32.191100  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1216 11:35:32.191206  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1216 11:35:32.198443  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1216 11:35:32.302098  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1216 11:35:32.302138  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 11:35:32.341566  250756 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1216 11:35:32.341631  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1216 11:35:32.341652  250756 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1216 11:35:32.341693  250756 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1216 11:35:32.341733  250756 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1216 11:35:32.341753  250756 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1216 11:35:32.341804  250756 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1216 11:35:32.341826  250756 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1216 11:35:32.341862  250756 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1216 11:35:32.410265  250756 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1216 11:35:32.410411  250756 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1216 11:35:32.411737  250756 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1216 11:35:32.411789  250756 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1216 11:35:32.411820  250756 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1216 11:35:32.411837  250756 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1216 11:35:32.411737  250756 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1216 11:35:32.411862  250756 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1216 11:35:32.411876  250756 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1216 11:35:32.411877  250756 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1216 11:35:32.411900  250756 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1216 11:35:32.415471  250756 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1216 11:35:32.459245  250756 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1216 11:35:32.459355  250756 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1216 11:35:32.552409  250756 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:35:35.178302  250756 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.766395702s)
	I1216 11:35:35.178354  250756 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1216 11:35:35.178358  250756 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4: (2.766457099s)
	I1216 11:35:35.178381  250756 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1216 11:35:35.178390  250756 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1216 11:35:35.178402  250756 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.719026973s)
	I1216 11:35:35.178425  250756 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1216 11:35:35.178435  250756 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1216 11:35:35.178435  250756 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.625995708s)
	I1216 11:35:35.922641  250756 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1216 11:35:35.922702  250756 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1216 11:35:35.922773  250756 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1216 11:35:36.268226  250756 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1216 11:35:36.268270  250756 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1216 11:35:36.268316  250756 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1216 11:35:36.707006  250756 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1216 11:35:36.707056  250756 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1216 11:35:36.707115  250756 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1216 11:35:38.750986  250756 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.043845225s)
	I1216 11:35:38.751028  250756 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1216 11:35:38.751057  250756 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1216 11:35:38.751113  250756 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1216 11:35:39.604551  250756 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1216 11:35:39.604609  250756 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1216 11:35:39.604662  250756 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1216 11:35:40.249894  250756 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1216 11:35:40.249950  250756 cache_images.go:123] Successfully loaded all cached images
	I1216 11:35:40.249958  250756 cache_images.go:92] duration metric: took 8.614223053s to LoadCachedImages
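LoadCachedImages above inspects each required image with `podman image inspect`, removes mismatching ones with crictl, and loads the cached archives with `podman load -i`. A simplified Go sketch of that per-image decision (helper names and the archive-path rule are assumptions inferred from the paths in the log):

// For each required image, check whether the runtime already has it and,
// if not, load the cached archive with `podman load`.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

var required = []string{
	"registry.k8s.io/kube-apiserver:v1.24.4",
	"registry.k8s.io/kube-proxy:v1.24.4",
	"registry.k8s.io/pause:3.7",
	"registry.k8s.io/etcd:3.5.3-0",
	"registry.k8s.io/coredns/coredns:v1.8.6",
}

func haveImage(img string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", img).Run() == nil
}

func archiveFor(img string) string {
	// e.g. registry.k8s.io/pause:3.7 -> /var/lib/minikube/images/pause_3.7
	base := strings.ReplaceAll(filepath.Base(img), ":", "_")
	return "/var/lib/minikube/images/" + base
}

func main() {
	for _, img := range required {
		if haveImage(img) {
			continue
		}
		fmt.Println("loading", img, "from", archiveFor(img))
		if err := exec.Command("sudo", "podman", "load", "-i", archiveFor(img)).Run(); err != nil {
			fmt.Println("load failed:", err)
		}
	}
}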
	I1216 11:35:40.249976  250756 kubeadm.go:934] updating node { 192.168.39.107 8443 v1.24.4 crio true true} ...
	I1216 11:35:40.250084  250756 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-741456 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-741456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 11:35:40.250164  250756 ssh_runner.go:195] Run: crio config
	I1216 11:35:40.300068  250756 cni.go:84] Creating CNI manager for ""
	I1216 11:35:40.300092  250756 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:35:40.300103  250756 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 11:35:40.300123  250756 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-741456 NodeName:test-preload-741456 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 11:35:40.300249  250756 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-741456"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 11:35:40.300308  250756 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1216 11:35:40.310126  250756 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:35:40.310196  250756 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:35:40.319151  250756 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1216 11:35:40.334587  250756 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:35:40.349993  250756 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
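The rendered kubeadm config (the YAML dump above) is written to /var/tmp/minikube/kubeadm.yaml.new for comparison against the existing file. As a loose illustration of how such a config can be rendered, a small text/template sketch; the template and field names here are invented for this example and are not minikube's bootstrapper templates:

// Render a fragment of a kubeadm ClusterConfiguration from a parameter
// struct, using values taken from the log above.
package main

import (
	"os"
	"text/template"
)

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:8443
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type params struct {
	KubernetesVersion    string
	ControlPlaneEndpoint string
	PodSubnet            string
	ServiceSubnet        string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(fragment))
	_ = t.Execute(os.Stdout, params{
		KubernetesVersion:    "v1.24.4",
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	})
}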
	I1216 11:35:40.366599  250756 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I1216 11:35:40.370247  250756 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:35:40.381914  250756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:35:40.499776  250756 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:35:40.518175  250756 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456 for IP: 192.168.39.107
	I1216 11:35:40.518205  250756 certs.go:194] generating shared ca certs ...
	I1216 11:35:40.518228  250756 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:35:40.518436  250756 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 11:35:40.518494  250756 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 11:35:40.518507  250756 certs.go:256] generating profile certs ...
	I1216 11:35:40.518657  250756 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/client.key
	I1216 11:35:40.518744  250756 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/apiserver.key.6f480436
	I1216 11:35:40.518791  250756 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/proxy-client.key
	I1216 11:35:40.519005  250756 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem (1338 bytes)
	W1216 11:35:40.519066  250756 certs.go:480] ignoring /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519_empty.pem, impossibly tiny 0 bytes
	I1216 11:35:40.519084  250756 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 11:35:40.519122  250756 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:35:40.519162  250756 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:35:40.519199  250756 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 11:35:40.519261  250756 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:35:40.520187  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:35:40.564180  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 11:35:40.611925  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:35:40.643216  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 11:35:40.670582  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 11:35:40.704141  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 11:35:40.726168  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:35:40.748022  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 11:35:40.770225  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:35:40.792042  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem --> /usr/share/ca-certificates/217519.pem (1338 bytes)
	I1216 11:35:40.813931  250756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /usr/share/ca-certificates/2175192.pem (1708 bytes)
	I1216 11:35:40.835854  250756 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:35:40.851795  250756 ssh_runner.go:195] Run: openssl version
	I1216 11:35:40.857095  250756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:35:40.867513  250756 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:35:40.871640  250756 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:35:40.871716  250756 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:35:40.877035  250756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 11:35:40.887432  250756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/217519.pem && ln -fs /usr/share/ca-certificates/217519.pem /etc/ssl/certs/217519.pem"
	I1216 11:35:40.899365  250756 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/217519.pem
	I1216 11:35:40.903887  250756 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 10:42 /usr/share/ca-certificates/217519.pem
	I1216 11:35:40.903948  250756 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/217519.pem
	I1216 11:35:40.909433  250756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/217519.pem /etc/ssl/certs/51391683.0"
	I1216 11:35:40.920817  250756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2175192.pem && ln -fs /usr/share/ca-certificates/2175192.pem /etc/ssl/certs/2175192.pem"
	I1216 11:35:40.932332  250756 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2175192.pem
	I1216 11:35:40.936569  250756 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 10:42 /usr/share/ca-certificates/2175192.pem
	I1216 11:35:40.936616  250756 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2175192.pem
	I1216 11:35:40.942120  250756 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2175192.pem /etc/ssl/certs/3ec20f2e.0"
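Each CA certificate placed under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs as <hash>.0 (e.g. b5213941.0 above). A hypothetical Go version of that hash-and-link step, shelling out to openssl so the hash matches what the log shows:

// Compute the OpenSSL subject hash of a PEM certificate and link it into
// /etc/ssl/certs as <hash>.0, the ln -fs equivalent of the commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Remove any stale link first, then symlink (needs root for /etc/ssl/certs).
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}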
	I1216 11:35:40.953860  250756 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:35:40.958368  250756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 11:35:40.964083  250756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 11:35:40.969866  250756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 11:35:40.975810  250756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 11:35:40.981494  250756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 11:35:40.986989  250756 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
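The `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. A small sketch doing the same check with crypto/x509 instead of openssl (path taken from the log; purely illustrative):

// Parse a PEM certificate and report whether it expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate is valid past the next 24h")
	}
}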
	I1216 11:35:40.992618  250756 kubeadm.go:392] StartCluster: {Name:test-preload-741456 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-741456 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:35:40.992718  250756 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:35:40.992776  250756 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:35:41.033734  250756 cri.go:89] found id: ""
	I1216 11:35:41.033834  250756 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 11:35:41.043762  250756 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 11:35:41.043789  250756 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 11:35:41.043854  250756 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 11:35:41.053096  250756 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 11:35:41.053514  250756 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-741456" does not appear in /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:35:41.053626  250756 kubeconfig.go:62] /home/jenkins/minikube-integration/20107-210204/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-741456" cluster setting kubeconfig missing "test-preload-741456" context setting]
	I1216 11:35:41.053865  250756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:35:41.054439  250756 kapi.go:59] client config for test-preload-741456: &rest.Config{Host:"https://192.168.39.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/client.crt", KeyFile:"/home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/client.key", CAFile:"/home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x244c9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 11:35:41.055116  250756 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 11:35:41.064337  250756 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.107
	I1216 11:35:41.064376  250756 kubeadm.go:1160] stopping kube-system containers ...
	I1216 11:35:41.064392  250756 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 11:35:41.064449  250756 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:35:41.097324  250756 cri.go:89] found id: ""
	I1216 11:35:41.097412  250756 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 11:35:41.113151  250756 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:35:41.122335  250756 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:35:41.122354  250756 kubeadm.go:157] found existing configuration files:
	
	I1216 11:35:41.122412  250756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:35:41.131068  250756 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:35:41.131141  250756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:35:41.139961  250756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:35:41.148545  250756 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:35:41.148614  250756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:35:41.157335  250756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:35:41.165830  250756 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:35:41.165894  250756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:35:41.174849  250756 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:35:41.183399  250756 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:35:41.183456  250756 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
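The sequence above is minikube's stale-kubeconfig cleanup during a control-plane restart: each of the four /etc/kubernetes/*.conf files is grepped for the expected control-plane endpoint and removed when the file is missing or does not reference that endpoint, so that the following `kubeadm init phase kubeconfig all` can regenerate them. Below is a minimal Go sketch of that per-file check, assuming plain substring matching is sufficient; this is a hypothetical helper for illustration, not minikube's actual code, and the endpoint string is the one shown in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale mirrors the "grep endpoint conf || rm -f conf" step in the
// log: keep the file only if it already points at the expected endpoint.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up (rm -f semantics)
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already targets the right control plane
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Fprintf(os.Stderr, "cleanup %s: %v\n", f, err)
		}
	}
}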
	I1216 11:35:41.192451  250756 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:35:41.201682  250756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:35:41.290763  250756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:35:41.903786  250756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:35:42.198786  250756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:35:42.257857  250756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:35:42.330952  250756 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:35:42.331061  250756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:35:42.831562  250756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:35:43.331973  250756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:35:43.347540  250756 api_server.go:72] duration metric: took 1.016585765s to wait for apiserver process to appear ...
	I1216 11:35:43.347575  250756 api_server.go:88] waiting for apiserver healthz status ...
	I1216 11:35:43.347605  250756 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1216 11:35:43.348098  250756 api_server.go:269] stopped: https://192.168.39.107:8443/healthz: Get "https://192.168.39.107:8443/healthz": dial tcp 192.168.39.107:8443: connect: connection refused
	I1216 11:35:43.847728  250756 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1216 11:35:47.255614  250756 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 11:35:47.255645  250756 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 11:35:47.255661  250756 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1216 11:35:47.307887  250756 api_server.go:279] https://192.168.39.107:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 11:35:47.307920  250756 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 11:35:47.348140  250756 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1216 11:35:47.356560  250756 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:35:47.356592  250756 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:35:47.848127  250756 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1216 11:35:47.853186  250756 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:35:47.853217  250756 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:35:48.348407  250756 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1216 11:35:48.353686  250756 api_server.go:279] https://192.168.39.107:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:35:48.353716  250756 api_server.go:103] status: https://192.168.39.107:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:35:48.848284  250756 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1216 11:35:48.855025  250756 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I1216 11:35:48.862055  250756 api_server.go:141] control plane version: v1.24.4
	I1216 11:35:48.862081  250756 api_server.go:131] duration metric: took 5.514499464s to wait for apiserver health ...
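The healthz wait above follows a simple pattern: poll https://192.168.39.107:8443/healthz every 500ms, treat 403 (anonymous access not yet granted by the RBAC bootstrap post-start hook) and 500 (remaining post-start hooks still failing) as "not ready yet", and stop at the first 200. A rough Go sketch of such a poll loop follows; it skips TLS verification for brevity, whereas the real client in the log is configured with the cluster CA and client certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes. Sketch only; InsecureSkipVerify stands in for loading
// the cluster CA.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 and 500 both mean "keep waiting", as in the log above.
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.39.107:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}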
	I1216 11:35:48.862090  250756 cni.go:84] Creating CNI manager for ""
	I1216 11:35:48.862097  250756 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:35:48.863712  250756 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 11:35:48.864898  250756 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 11:35:48.887511  250756 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 11:35:48.904006  250756 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 11:35:48.904101  250756 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 11:35:48.904122  250756 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 11:35:48.915595  250756 system_pods.go:59] 7 kube-system pods found
	I1216 11:35:48.915635  250756 system_pods.go:61] "coredns-6d4b75cb6d-2ckjf" [45c0b538-c116-4984-91af-480902037a54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 11:35:48.915641  250756 system_pods.go:61] "etcd-test-preload-741456" [ac3c7b62-c9c8-4c04-9595-0d601daf7307] Running
	I1216 11:35:48.915645  250756 system_pods.go:61] "kube-apiserver-test-preload-741456" [099aa3a2-24c6-4fa5-b934-be85e3a4be47] Running
	I1216 11:35:48.915649  250756 system_pods.go:61] "kube-controller-manager-test-preload-741456" [0ce987b2-8ba5-43bf-94b4-402128ccb9b1] Running
	I1216 11:35:48.915653  250756 system_pods.go:61] "kube-proxy-g4jkk" [1bae6021-c773-4ea2-b2b0-1eb6f6ae25ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 11:35:48.915657  250756 system_pods.go:61] "kube-scheduler-test-preload-741456" [e8894996-1f12-4e80-bd49-25a17d9fac3d] Running
	I1216 11:35:48.915663  250756 system_pods.go:61] "storage-provisioner" [8145521c-8750-4253-83d9-4ecd8e325a44] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 11:35:48.915668  250756 system_pods.go:74] duration metric: took 11.642268ms to wait for pod list to return data ...
	I1216 11:35:48.915675  250756 node_conditions.go:102] verifying NodePressure condition ...
	I1216 11:35:48.924833  250756 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 11:35:48.924860  250756 node_conditions.go:123] node cpu capacity is 2
	I1216 11:35:48.924870  250756 node_conditions.go:105] duration metric: took 9.187852ms to run NodePressure ...
	I1216 11:35:48.924894  250756 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:35:49.094868  250756 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1216 11:35:49.098899  250756 retry.go:31] will retry after 280.820421ms: kubelet not initialised
	I1216 11:35:49.385478  250756 retry.go:31] will retry after 234.666849ms: kubelet not initialised
	I1216 11:35:49.626940  250756 kubeadm.go:739] kubelet initialised
	I1216 11:35:49.626965  250756 kubeadm.go:740] duration metric: took 532.072194ms waiting for restarted kubelet to initialise ...
	I1216 11:35:49.626974  250756 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 11:35:49.632715  250756 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-2ckjf" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:49.637108  250756 pod_ready.go:98] node "test-preload-741456" hosting pod "coredns-6d4b75cb6d-2ckjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:49.637134  250756 pod_ready.go:82] duration metric: took 4.394169ms for pod "coredns-6d4b75cb6d-2ckjf" in "kube-system" namespace to be "Ready" ...
	E1216 11:35:49.637146  250756 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-741456" hosting pod "coredns-6d4b75cb6d-2ckjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:49.637163  250756 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:49.641823  250756 pod_ready.go:98] node "test-preload-741456" hosting pod "etcd-test-preload-741456" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:49.641847  250756 pod_ready.go:82] duration metric: took 4.671679ms for pod "etcd-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	E1216 11:35:49.641855  250756 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-741456" hosting pod "etcd-test-preload-741456" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:49.641862  250756 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:49.651032  250756 pod_ready.go:98] node "test-preload-741456" hosting pod "kube-apiserver-test-preload-741456" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:49.651064  250756 pod_ready.go:82] duration metric: took 9.192636ms for pod "kube-apiserver-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	E1216 11:35:49.651076  250756 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-741456" hosting pod "kube-apiserver-test-preload-741456" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:49.651085  250756 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:49.708873  250756 pod_ready.go:98] node "test-preload-741456" hosting pod "kube-controller-manager-test-preload-741456" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:49.708906  250756 pod_ready.go:82] duration metric: took 57.809883ms for pod "kube-controller-manager-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	E1216 11:35:49.708917  250756 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-741456" hosting pod "kube-controller-manager-test-preload-741456" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:49.708932  250756 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-g4jkk" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:50.108067  250756 pod_ready.go:98] node "test-preload-741456" hosting pod "kube-proxy-g4jkk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:50.108098  250756 pod_ready.go:82] duration metric: took 399.155529ms for pod "kube-proxy-g4jkk" in "kube-system" namespace to be "Ready" ...
	E1216 11:35:50.108110  250756 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-741456" hosting pod "kube-proxy-g4jkk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:50.108116  250756 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:50.507408  250756 pod_ready.go:98] node "test-preload-741456" hosting pod "kube-scheduler-test-preload-741456" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:50.507444  250756 pod_ready.go:82] duration metric: took 399.320253ms for pod "kube-scheduler-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	E1216 11:35:50.507458  250756 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-741456" hosting pod "kube-scheduler-test-preload-741456" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:50.507469  250756 pod_ready.go:39] duration metric: took 880.486117ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 11:35:50.507497  250756 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 11:35:50.519777  250756 ops.go:34] apiserver oom_adj: -16
	I1216 11:35:50.519805  250756 kubeadm.go:597] duration metric: took 9.476009373s to restartPrimaryControlPlane
	I1216 11:35:50.519816  250756 kubeadm.go:394] duration metric: took 9.527205521s to StartCluster
	I1216 11:35:50.519835  250756 settings.go:142] acquiring lock: {Name:mk3e7a5ea729d05e36deedcd45fd7829ccafa72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:35:50.519925  250756 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:35:50.520663  250756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:35:50.520971  250756 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:35:50.521105  250756 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 11:35:50.521201  250756 config.go:182] Loaded profile config "test-preload-741456": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1216 11:35:50.521221  250756 addons.go:69] Setting storage-provisioner=true in profile "test-preload-741456"
	I1216 11:35:50.521245  250756 addons.go:234] Setting addon storage-provisioner=true in "test-preload-741456"
	I1216 11:35:50.521251  250756 addons.go:69] Setting default-storageclass=true in profile "test-preload-741456"
	I1216 11:35:50.521280  250756 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-741456"
	W1216 11:35:50.521256  250756 addons.go:243] addon storage-provisioner should already be in state true
	I1216 11:35:50.521375  250756 host.go:66] Checking if "test-preload-741456" exists ...
	I1216 11:35:50.521629  250756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:35:50.521678  250756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:35:50.521759  250756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:35:50.521804  250756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:35:50.522831  250756 out.go:177] * Verifying Kubernetes components...
	I1216 11:35:50.524485  250756 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:35:50.537167  250756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I1216 11:35:50.537659  250756 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:35:50.538218  250756 main.go:141] libmachine: Using API Version  1
	I1216 11:35:50.538251  250756 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:35:50.538635  250756 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:35:50.538877  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetState
	I1216 11:35:50.538927  250756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42467
	I1216 11:35:50.539461  250756 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:35:50.539954  250756 main.go:141] libmachine: Using API Version  1
	I1216 11:35:50.539974  250756 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:35:50.540380  250756 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:35:50.540997  250756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:35:50.541047  250756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:35:50.541273  250756 kapi.go:59] client config for test-preload-741456: &rest.Config{Host:"https://192.168.39.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/client.crt", KeyFile:"/home/jenkins/minikube-integration/20107-210204/.minikube/profiles/test-preload-741456/client.key", CAFile:"/home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x244c9c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 11:35:50.541561  250756 addons.go:234] Setting addon default-storageclass=true in "test-preload-741456"
	W1216 11:35:50.541584  250756 addons.go:243] addon default-storageclass should already be in state true
	I1216 11:35:50.541610  250756 host.go:66] Checking if "test-preload-741456" exists ...
	I1216 11:35:50.541882  250756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:35:50.541928  250756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:35:50.556736  250756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46717
	I1216 11:35:50.557330  250756 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:35:50.557825  250756 main.go:141] libmachine: Using API Version  1
	I1216 11:35:50.557853  250756 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:35:50.558248  250756 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:35:50.558728  250756 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:35:50.558772  250756 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:35:50.561876  250756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38315
	I1216 11:35:50.585715  250756 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:35:50.586328  250756 main.go:141] libmachine: Using API Version  1
	I1216 11:35:50.586359  250756 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:35:50.586716  250756 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:35:50.586929  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetState
	I1216 11:35:50.588643  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:50.590776  250756 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:35:50.592258  250756 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:35:50.592284  250756 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 11:35:50.592307  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:50.595273  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:50.595735  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:50.595764  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:50.595962  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:50.596158  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:50.596357  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:50.596531  250756 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/test-preload-741456/id_rsa Username:docker}
	I1216 11:35:50.601650  250756 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I1216 11:35:50.602088  250756 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:35:50.602602  250756 main.go:141] libmachine: Using API Version  1
	I1216 11:35:50.602646  250756 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:35:50.602998  250756 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:35:50.603184  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetState
	I1216 11:35:50.604867  250756 main.go:141] libmachine: (test-preload-741456) Calling .DriverName
	I1216 11:35:50.605100  250756 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 11:35:50.605116  250756 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 11:35:50.605139  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHHostname
	I1216 11:35:50.607858  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:50.608258  250756 main.go:141] libmachine: (test-preload-741456) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:a7:c5", ip: ""} in network mk-test-preload-741456: {Iface:virbr1 ExpiryTime:2024-12-16 12:35:18 +0000 UTC Type:0 Mac:52:54:00:e1:a7:c5 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:test-preload-741456 Clientid:01:52:54:00:e1:a7:c5}
	I1216 11:35:50.608290  250756 main.go:141] libmachine: (test-preload-741456) DBG | domain test-preload-741456 has defined IP address 192.168.39.107 and MAC address 52:54:00:e1:a7:c5 in network mk-test-preload-741456
	I1216 11:35:50.608474  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHPort
	I1216 11:35:50.608715  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHKeyPath
	I1216 11:35:50.608888  250756 main.go:141] libmachine: (test-preload-741456) Calling .GetSSHUsername
	I1216 11:35:50.609073  250756 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/test-preload-741456/id_rsa Username:docker}
	I1216 11:35:50.694440  250756 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:35:50.712026  250756 node_ready.go:35] waiting up to 6m0s for node "test-preload-741456" to be "Ready" ...
	I1216 11:35:50.802626  250756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:35:50.803608  250756 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 11:35:51.716764  250756 main.go:141] libmachine: Making call to close driver server
	I1216 11:35:51.716793  250756 main.go:141] libmachine: (test-preload-741456) Calling .Close
	I1216 11:35:51.716829  250756 main.go:141] libmachine: Making call to close driver server
	I1216 11:35:51.716851  250756 main.go:141] libmachine: (test-preload-741456) Calling .Close
	I1216 11:35:51.717107  250756 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:35:51.717126  250756 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:35:51.717136  250756 main.go:141] libmachine: Making call to close driver server
	I1216 11:35:51.717143  250756 main.go:141] libmachine: (test-preload-741456) Calling .Close
	I1216 11:35:51.717192  250756 main.go:141] libmachine: (test-preload-741456) DBG | Closing plugin on server side
	I1216 11:35:51.717200  250756 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:35:51.717211  250756 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:35:51.717226  250756 main.go:141] libmachine: Making call to close driver server
	I1216 11:35:51.717236  250756 main.go:141] libmachine: (test-preload-741456) Calling .Close
	I1216 11:35:51.717399  250756 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:35:51.717413  250756 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:35:51.717481  250756 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:35:51.717509  250756 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:35:51.725065  250756 main.go:141] libmachine: Making call to close driver server
	I1216 11:35:51.725090  250756 main.go:141] libmachine: (test-preload-741456) Calling .Close
	I1216 11:35:51.725314  250756 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:35:51.725328  250756 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:35:51.727118  250756 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1216 11:35:51.728369  250756 addons.go:510] duration metric: took 1.207279615s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 11:35:52.717062  250756 node_ready.go:53] node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:55.216265  250756 node_ready.go:53] node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:57.221135  250756 node_ready.go:53] node "test-preload-741456" has status "Ready":"False"
	I1216 11:35:57.715929  250756 node_ready.go:49] node "test-preload-741456" has status "Ready":"True"
	I1216 11:35:57.715956  250756 node_ready.go:38] duration metric: took 7.00388555s for node "test-preload-741456" to be "Ready" ...
	I1216 11:35:57.715971  250756 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 11:35:57.720944  250756 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-2ckjf" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:57.725664  250756 pod_ready.go:93] pod "coredns-6d4b75cb6d-2ckjf" in "kube-system" namespace has status "Ready":"True"
	I1216 11:35:57.725686  250756 pod_ready.go:82] duration metric: took 4.697788ms for pod "coredns-6d4b75cb6d-2ckjf" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:57.725695  250756 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:57.729559  250756 pod_ready.go:93] pod "etcd-test-preload-741456" in "kube-system" namespace has status "Ready":"True"
	I1216 11:35:57.729579  250756 pod_ready.go:82] duration metric: took 3.878373ms for pod "etcd-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:57.729587  250756 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:57.733598  250756 pod_ready.go:93] pod "kube-apiserver-test-preload-741456" in "kube-system" namespace has status "Ready":"True"
	I1216 11:35:57.733631  250756 pod_ready.go:82] duration metric: took 4.038301ms for pod "kube-apiserver-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:57.733641  250756 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:57.741331  250756 pod_ready.go:93] pod "kube-controller-manager-test-preload-741456" in "kube-system" namespace has status "Ready":"True"
	I1216 11:35:57.741365  250756 pod_ready.go:82] duration metric: took 7.707054ms for pod "kube-controller-manager-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:57.741378  250756 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g4jkk" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:58.116497  250756 pod_ready.go:93] pod "kube-proxy-g4jkk" in "kube-system" namespace has status "Ready":"True"
	I1216 11:35:58.116524  250756 pod_ready.go:82] duration metric: took 375.138702ms for pod "kube-proxy-g4jkk" in "kube-system" namespace to be "Ready" ...
	I1216 11:35:58.116534  250756 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:36:00.124837  250756 pod_ready.go:103] pod "kube-scheduler-test-preload-741456" in "kube-system" namespace has status "Ready":"False"
	I1216 11:36:02.623469  250756 pod_ready.go:93] pod "kube-scheduler-test-preload-741456" in "kube-system" namespace has status "Ready":"True"
	I1216 11:36:02.623498  250756 pod_ready.go:82] duration metric: took 4.506955469s for pod "kube-scheduler-test-preload-741456" in "kube-system" namespace to be "Ready" ...
	I1216 11:36:02.623514  250756 pod_ready.go:39] duration metric: took 4.90753103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
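pod_ready.go above is doing for each system-critical pod what node_ready.go did for the node: read the object and wait for its Ready condition to become True (earlier, at 11:35:49, the same pod checks were skipped because the node itself still reported Ready=False). A compact client-go sketch of both condition checks follows; the kubeconfig path and object names are taken from the log, and the helper names are illustrative rather than minikube's.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

// podReady reports whether the pod's Ready condition is True.
func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
	p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20107-210204/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		if ok, _ := nodeReady(cs, "test-preload-741456"); ok {
			if ok, _ := podReady(cs, "kube-system", "kube-scheduler-test-preload-741456"); ok {
				fmt.Println("node and scheduler pod are Ready")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}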
	I1216 11:36:02.623535  250756 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:36:02.623618  250756 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:36:02.637808  250756 api_server.go:72] duration metric: took 12.116790975s to wait for apiserver process to appear ...
	I1216 11:36:02.637837  250756 api_server.go:88] waiting for apiserver healthz status ...
	I1216 11:36:02.637857  250756 api_server.go:253] Checking apiserver healthz at https://192.168.39.107:8443/healthz ...
	I1216 11:36:02.644334  250756 api_server.go:279] https://192.168.39.107:8443/healthz returned 200:
	ok
	I1216 11:36:02.645594  250756 api_server.go:141] control plane version: v1.24.4
	I1216 11:36:02.645614  250756 api_server.go:131] duration metric: took 7.770022ms to wait for apiserver health ...
	I1216 11:36:02.645622  250756 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 11:36:02.650696  250756 system_pods.go:59] 7 kube-system pods found
	I1216 11:36:02.650730  250756 system_pods.go:61] "coredns-6d4b75cb6d-2ckjf" [45c0b538-c116-4984-91af-480902037a54] Running
	I1216 11:36:02.650740  250756 system_pods.go:61] "etcd-test-preload-741456" [ac3c7b62-c9c8-4c04-9595-0d601daf7307] Running
	I1216 11:36:02.650747  250756 system_pods.go:61] "kube-apiserver-test-preload-741456" [099aa3a2-24c6-4fa5-b934-be85e3a4be47] Running
	I1216 11:36:02.650769  250756 system_pods.go:61] "kube-controller-manager-test-preload-741456" [0ce987b2-8ba5-43bf-94b4-402128ccb9b1] Running
	I1216 11:36:02.650779  250756 system_pods.go:61] "kube-proxy-g4jkk" [1bae6021-c773-4ea2-b2b0-1eb6f6ae25ea] Running
	I1216 11:36:02.650788  250756 system_pods.go:61] "kube-scheduler-test-preload-741456" [e8894996-1f12-4e80-bd49-25a17d9fac3d] Running
	I1216 11:36:02.650796  250756 system_pods.go:61] "storage-provisioner" [8145521c-8750-4253-83d9-4ecd8e325a44] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 11:36:02.650813  250756 system_pods.go:74] duration metric: took 5.184396ms to wait for pod list to return data ...
	I1216 11:36:02.650832  250756 default_sa.go:34] waiting for default service account to be created ...
	I1216 11:36:02.653096  250756 default_sa.go:45] found service account: "default"
	I1216 11:36:02.653119  250756 default_sa.go:55] duration metric: took 2.280549ms for default service account to be created ...
	I1216 11:36:02.653126  250756 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 11:36:02.718712  250756 system_pods.go:86] 7 kube-system pods found
	I1216 11:36:02.718742  250756 system_pods.go:89] "coredns-6d4b75cb6d-2ckjf" [45c0b538-c116-4984-91af-480902037a54] Running
	I1216 11:36:02.718748  250756 system_pods.go:89] "etcd-test-preload-741456" [ac3c7b62-c9c8-4c04-9595-0d601daf7307] Running
	I1216 11:36:02.718752  250756 system_pods.go:89] "kube-apiserver-test-preload-741456" [099aa3a2-24c6-4fa5-b934-be85e3a4be47] Running
	I1216 11:36:02.718756  250756 system_pods.go:89] "kube-controller-manager-test-preload-741456" [0ce987b2-8ba5-43bf-94b4-402128ccb9b1] Running
	I1216 11:36:02.718759  250756 system_pods.go:89] "kube-proxy-g4jkk" [1bae6021-c773-4ea2-b2b0-1eb6f6ae25ea] Running
	I1216 11:36:02.718762  250756 system_pods.go:89] "kube-scheduler-test-preload-741456" [e8894996-1f12-4e80-bd49-25a17d9fac3d] Running
	I1216 11:36:02.718768  250756 system_pods.go:89] "storage-provisioner" [8145521c-8750-4253-83d9-4ecd8e325a44] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 11:36:02.718778  250756 system_pods.go:126] duration metric: took 65.64625ms to wait for k8s-apps to be running ...
	I1216 11:36:02.718790  250756 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 11:36:02.718851  250756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:36:02.732750  250756 system_svc.go:56] duration metric: took 13.947725ms WaitForService to wait for kubelet
	I1216 11:36:02.732789  250756 kubeadm.go:582] duration metric: took 12.211780535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:36:02.732810  250756 node_conditions.go:102] verifying NodePressure condition ...
	I1216 11:36:02.916262  250756 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 11:36:02.916288  250756 node_conditions.go:123] node cpu capacity is 2
	I1216 11:36:02.916298  250756 node_conditions.go:105] duration metric: took 183.483234ms to run NodePressure ...
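The NodePressure verification above is essentially reading the node object's reported capacity (here 17734596Ki of ephemeral storage and 2 CPUs) and its condition list. A small client-go sketch of reading those fields, reusing the kubeconfig path and node name from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20107-210204/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-741456", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// These correspond to the "node storage ephemeral capacity" and
	// "node cpu capacity" lines in the log.
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Println("ephemeral-storage:", eph.String())
	fmt.Println("cpu:", cpu.String())
	// Pressure conditions (MemoryPressure, DiskPressure, PIDPressure) should
	// all be False on a healthy node.
	for _, c := range node.Status.Conditions {
		if c.Type != corev1.NodeReady {
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
}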
	I1216 11:36:02.916323  250756 start.go:241] waiting for startup goroutines ...
	I1216 11:36:02.916331  250756 start.go:246] waiting for cluster config update ...
	I1216 11:36:02.916342  250756 start.go:255] writing updated cluster config ...
	I1216 11:36:02.916588  250756 ssh_runner.go:195] Run: rm -f paused
	I1216 11:36:02.964540  250756 start.go:600] kubectl: 1.32.0, cluster: 1.24.4 (minor skew: 8)
	I1216 11:36:02.966381  250756 out.go:201] 
	W1216 11:36:02.967737  250756 out.go:270] ! /usr/local/bin/kubectl is version 1.32.0, which may have incompatibilities with Kubernetes 1.24.4.
	I1216 11:36:02.969134  250756 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1216 11:36:02.970439  250756 out.go:177] * Done! kubectl is now configured to use "test-preload-741456" cluster and "default" namespace by default
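The closing warning comes from a simple minor-version comparison between the local kubectl (1.32.0) and the cluster (1.24.4): kubectl is only supported within one minor version of the API server, and here the skew is 8. A tiny Go sketch of that skew check (hypothetical helper; real version strings can carry pre-release or build suffixes that this ignores):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

// minorSkew returns the absolute difference between the minor versions.
func minorSkew(a, b string) (int, error) {
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, err := minorSkew("1.32.0", "1.24.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("minor skew:", skew) // 8, matching the log above
	if skew > 1 {
		fmt.Println("warning: kubectl may have incompatibilities with this cluster")
	}
}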
	
	
	==> CRI-O <==
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.861792575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348963861769095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2950cae5-2bb7-4d0e-bc51-927de66bedf3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.862368074Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9270c33-786e-4ad6-a147-838be040790c name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.862453951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9270c33-786e-4ad6-a147-838be040790c name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.862670829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a033d31861b67c4c27240071822ed3053f45b382df63683b8d606c6f6a4f671,PodSandboxId:6b354ff0d047b20ac3ed93ad97af241da7c3861c1e7972830abe37cb8e7931bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734348955423386390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2ckjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c0b538-c116-4984-91af-480902037a54,},Annotations:map[string]string{io.kubernetes.container.hash: 6839e92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138051e545d0f592c8436adb2426a7b5522f7187ecb353e86f207f4ec558a9ba,PodSandboxId:684dfb9dd7b9fb88c372302ef8a499c83de04d59434ec77a8aeafaf0d8b01bff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734348948614970170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4jkk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1bae6021-c773-4ea2-b2b0-1eb6f6ae25ea,},Annotations:map[string]string{io.kubernetes.container.hash: f3a4f027,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ead9e0f9a1a6c6780a78f21e9210b92aa41c4dda87bebd259eb2abb0236349e,PodSandboxId:b0a07f16d0b3e4434ffde1fd3141d6102177d6e4c046344e388f4f18bba22d46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1734348948457106769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814
5521c-8750-4253-83d9-4ecd8e325a44,},Annotations:map[string]string{io.kubernetes.container.hash: 76075982,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f5127b6c0117f961d0afa506276106502ea3e95e15cce01e4f9545aa69e311,PodSandboxId:e7ea9a3d16be9a6e984cf5588ca2870fa9fb357c6b6c57aeaf1576e2be3b46b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734348943093496738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e3149812a7be3a4cc0b00e130c9da0,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc5dd663,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cad742e38a0f696a1f64869ae7c571d407ddf9cba4332c3971d72fb21634a8be,PodSandboxId:03baf9a6e832b159ba61db7e60c35a0e0d851f04c197b9df2d2d6ddbd6ea88ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734348943011179006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1a282fb4d2dcfcbfb949b6cb980300c,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85274011d03a9392aac1e165b2777f4a2b0f2cb1c3e8f37834b1438d7212817,PodSandboxId:dbf0ea18033c4338bfa84b693325a846507c185a6bf1f972fe60c2ec0fe8d2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734348942979409363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cd76307c81f780628274d6f81a9819c,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b947508ddb103f80ef6f7810a6b1a28a8d2de5ce09fe0ea65566e70c382e3aa,PodSandboxId:1c765e11cd152dd0db586e4026d4be1cfa62425f91913e7455cc3f27d4c9cb53,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734348942977320316,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5360c6451a801366d3f99d87dc0332e2,},Annotations
:map[string]string{io.kubernetes.container.hash: 1a6aa8b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9270c33-786e-4ad6-a147-838be040790c name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.898004133Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afd345d8-73bb-4325-b437-52310edd6367 name=/runtime.v1.RuntimeService/Version
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.898079685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afd345d8-73bb-4325-b437-52310edd6367 name=/runtime.v1.RuntimeService/Version
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.899283888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43f4c689-7530-459a-b239-659bbe7a146e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.899821857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348963899799164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43f4c689-7530-459a-b239-659bbe7a146e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.900464383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f70049ea-69bc-4a5a-8815-dfe472185761 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.900530123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f70049ea-69bc-4a5a-8815-dfe472185761 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.900693287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a033d31861b67c4c27240071822ed3053f45b382df63683b8d606c6f6a4f671,PodSandboxId:6b354ff0d047b20ac3ed93ad97af241da7c3861c1e7972830abe37cb8e7931bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734348955423386390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2ckjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c0b538-c116-4984-91af-480902037a54,},Annotations:map[string]string{io.kubernetes.container.hash: 6839e92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138051e545d0f592c8436adb2426a7b5522f7187ecb353e86f207f4ec558a9ba,PodSandboxId:684dfb9dd7b9fb88c372302ef8a499c83de04d59434ec77a8aeafaf0d8b01bff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734348948614970170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4jkk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1bae6021-c773-4ea2-b2b0-1eb6f6ae25ea,},Annotations:map[string]string{io.kubernetes.container.hash: f3a4f027,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ead9e0f9a1a6c6780a78f21e9210b92aa41c4dda87bebd259eb2abb0236349e,PodSandboxId:b0a07f16d0b3e4434ffde1fd3141d6102177d6e4c046344e388f4f18bba22d46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1734348948457106769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814
5521c-8750-4253-83d9-4ecd8e325a44,},Annotations:map[string]string{io.kubernetes.container.hash: 76075982,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f5127b6c0117f961d0afa506276106502ea3e95e15cce01e4f9545aa69e311,PodSandboxId:e7ea9a3d16be9a6e984cf5588ca2870fa9fb357c6b6c57aeaf1576e2be3b46b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734348943093496738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e3149812a7be3a4cc0b00e130c9da0,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc5dd663,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cad742e38a0f696a1f64869ae7c571d407ddf9cba4332c3971d72fb21634a8be,PodSandboxId:03baf9a6e832b159ba61db7e60c35a0e0d851f04c197b9df2d2d6ddbd6ea88ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734348943011179006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1a282fb4d2dcfcbfb949b6cb980300c,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85274011d03a9392aac1e165b2777f4a2b0f2cb1c3e8f37834b1438d7212817,PodSandboxId:dbf0ea18033c4338bfa84b693325a846507c185a6bf1f972fe60c2ec0fe8d2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734348942979409363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cd76307c81f780628274d6f81a9819c,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b947508ddb103f80ef6f7810a6b1a28a8d2de5ce09fe0ea65566e70c382e3aa,PodSandboxId:1c765e11cd152dd0db586e4026d4be1cfa62425f91913e7455cc3f27d4c9cb53,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734348942977320316,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5360c6451a801366d3f99d87dc0332e2,},Annotations
:map[string]string{io.kubernetes.container.hash: 1a6aa8b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f70049ea-69bc-4a5a-8815-dfe472185761 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.933812091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04453bd3-da77-4224-87f8-ebe9dfc3750f name=/runtime.v1.RuntimeService/Version
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.933885885Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04453bd3-da77-4224-87f8-ebe9dfc3750f name=/runtime.v1.RuntimeService/Version
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.934865242Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68a6dbde-15d3-4f5f-b39b-d7ce60f1b9d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.935291001Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348963935271277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68a6dbde-15d3-4f5f-b39b-d7ce60f1b9d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.935730429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4d5901f2-ce8a-4b16-83ba-a69472efb254 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.935777445Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4d5901f2-ce8a-4b16-83ba-a69472efb254 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.935963656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a033d31861b67c4c27240071822ed3053f45b382df63683b8d606c6f6a4f671,PodSandboxId:6b354ff0d047b20ac3ed93ad97af241da7c3861c1e7972830abe37cb8e7931bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734348955423386390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2ckjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c0b538-c116-4984-91af-480902037a54,},Annotations:map[string]string{io.kubernetes.container.hash: 6839e92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138051e545d0f592c8436adb2426a7b5522f7187ecb353e86f207f4ec558a9ba,PodSandboxId:684dfb9dd7b9fb88c372302ef8a499c83de04d59434ec77a8aeafaf0d8b01bff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734348948614970170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4jkk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1bae6021-c773-4ea2-b2b0-1eb6f6ae25ea,},Annotations:map[string]string{io.kubernetes.container.hash: f3a4f027,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ead9e0f9a1a6c6780a78f21e9210b92aa41c4dda87bebd259eb2abb0236349e,PodSandboxId:b0a07f16d0b3e4434ffde1fd3141d6102177d6e4c046344e388f4f18bba22d46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1734348948457106769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814
5521c-8750-4253-83d9-4ecd8e325a44,},Annotations:map[string]string{io.kubernetes.container.hash: 76075982,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f5127b6c0117f961d0afa506276106502ea3e95e15cce01e4f9545aa69e311,PodSandboxId:e7ea9a3d16be9a6e984cf5588ca2870fa9fb357c6b6c57aeaf1576e2be3b46b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734348943093496738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e3149812a7be3a4cc0b00e130c9da0,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc5dd663,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cad742e38a0f696a1f64869ae7c571d407ddf9cba4332c3971d72fb21634a8be,PodSandboxId:03baf9a6e832b159ba61db7e60c35a0e0d851f04c197b9df2d2d6ddbd6ea88ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734348943011179006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1a282fb4d2dcfcbfb949b6cb980300c,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85274011d03a9392aac1e165b2777f4a2b0f2cb1c3e8f37834b1438d7212817,PodSandboxId:dbf0ea18033c4338bfa84b693325a846507c185a6bf1f972fe60c2ec0fe8d2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734348942979409363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cd76307c81f780628274d6f81a9819c,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b947508ddb103f80ef6f7810a6b1a28a8d2de5ce09fe0ea65566e70c382e3aa,PodSandboxId:1c765e11cd152dd0db586e4026d4be1cfa62425f91913e7455cc3f27d4c9cb53,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734348942977320316,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5360c6451a801366d3f99d87dc0332e2,},Annotations
:map[string]string{io.kubernetes.container.hash: 1a6aa8b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4d5901f2-ce8a-4b16-83ba-a69472efb254 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.968291300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69c4b60e-2ea3-4ee7-8e25-599fca5e99e3 name=/runtime.v1.RuntimeService/Version
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.968364400Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69c4b60e-2ea3-4ee7-8e25-599fca5e99e3 name=/runtime.v1.RuntimeService/Version
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.969482263Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cc0a8efe-eedc-4d13-9f45-85c786a09c78 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.969904284Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348963969883397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cc0a8efe-eedc-4d13-9f45-85c786a09c78 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.970547358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c36ffce1-5359-4638-a40b-b087efe84fb5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.970644596Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c36ffce1-5359-4638-a40b-b087efe84fb5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 11:36:03 test-preload-741456 crio[688]: time="2024-12-16 11:36:03.970817850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7a033d31861b67c4c27240071822ed3053f45b382df63683b8d606c6f6a4f671,PodSandboxId:6b354ff0d047b20ac3ed93ad97af241da7c3861c1e7972830abe37cb8e7931bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1734348955423386390,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2ckjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45c0b538-c116-4984-91af-480902037a54,},Annotations:map[string]string{io.kubernetes.container.hash: 6839e92d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138051e545d0f592c8436adb2426a7b5522f7187ecb353e86f207f4ec558a9ba,PodSandboxId:684dfb9dd7b9fb88c372302ef8a499c83de04d59434ec77a8aeafaf0d8b01bff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1734348948614970170,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g4jkk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1bae6021-c773-4ea2-b2b0-1eb6f6ae25ea,},Annotations:map[string]string{io.kubernetes.container.hash: f3a4f027,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ead9e0f9a1a6c6780a78f21e9210b92aa41c4dda87bebd259eb2abb0236349e,PodSandboxId:b0a07f16d0b3e4434ffde1fd3141d6102177d6e4c046344e388f4f18bba22d46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1734348948457106769,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 814
5521c-8750-4253-83d9-4ecd8e325a44,},Annotations:map[string]string{io.kubernetes.container.hash: 76075982,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91f5127b6c0117f961d0afa506276106502ea3e95e15cce01e4f9545aa69e311,PodSandboxId:e7ea9a3d16be9a6e984cf5588ca2870fa9fb357c6b6c57aeaf1576e2be3b46b2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1734348943093496738,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5e3149812a7be3a4cc0b00e130c9da0,},Annot
ations:map[string]string{io.kubernetes.container.hash: dc5dd663,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cad742e38a0f696a1f64869ae7c571d407ddf9cba4332c3971d72fb21634a8be,PodSandboxId:03baf9a6e832b159ba61db7e60c35a0e0d851f04c197b9df2d2d6ddbd6ea88ad,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1734348943011179006,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1a282fb4d2dcfcbfb949b6cb980300c,},Annotations:map[
string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e85274011d03a9392aac1e165b2777f4a2b0f2cb1c3e8f37834b1438d7212817,PodSandboxId:dbf0ea18033c4338bfa84b693325a846507c185a6bf1f972fe60c2ec0fe8d2db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1734348942979409363,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cd76307c81f780628274d6f81a9819c,},
Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b947508ddb103f80ef6f7810a6b1a28a8d2de5ce09fe0ea65566e70c382e3aa,PodSandboxId:1c765e11cd152dd0db586e4026d4be1cfa62425f91913e7455cc3f27d4c9cb53,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1734348942977320316,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-741456,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5360c6451a801366d3f99d87dc0332e2,},Annotations
:map[string]string{io.kubernetes.container.hash: 1a6aa8b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c36ffce1-5359-4638-a40b-b087efe84fb5 name=/runtime.v1.RuntimeService/ListContainers
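
The repeated `/runtime.v1.RuntimeService/Version` and `ListContainers` entries above are ordinary CRI polling against CRI-O over the socket named in the node's cri-socket annotation (`unix:///var/run/crio/crio.sock`). As an illustrative sketch only (not part of the captured logs), the same two calls could be issued directly with the CRI client stubs, assuming `k8s.io/cri-api` and `google.golang.org/grpc` are available:

```go
// Minimal sketch: issue the same Version + ListContainers calls that appear
// in the CRI-O debug log above. Assumes the default CRI-O socket path.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O listens on a local unix socket, so no transport security is used.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Equivalent of the "/runtime.v1.RuntimeService/Version" requests above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// Equivalent of the "/runtime.v1.RuntimeService/ListContainers" requests:
	// an empty filter returns the full container list, as the log notes.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}
```

The output of such a sketch would correspond to the "container status" table that follows.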
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7a033d31861b6       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   6b354ff0d047b       coredns-6d4b75cb6d-2ckjf
	138051e545d0f       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   684dfb9dd7b9f       kube-proxy-g4jkk
	9ead9e0f9a1a6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       2                   b0a07f16d0b3e       storage-provisioner
	91f5127b6c011       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   e7ea9a3d16be9       etcd-test-preload-741456
	cad742e38a0f6       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   03baf9a6e832b       kube-scheduler-test-preload-741456
	e85274011d03a       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   dbf0ea18033c4       kube-controller-manager-test-preload-741456
	4b947508ddb10       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   1c765e11cd152       kube-apiserver-test-preload-741456
	
	
	==> coredns [7a033d31861b67c4c27240071822ed3053f45b382df63683b8d606c6f6a4f671] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:32825 - 54690 "HINFO IN 150501215468423085.6036968122712364839. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.103938954s
	
	
	==> describe nodes <==
	Name:               test-preload-741456
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-741456
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8
	                    minikube.k8s.io/name=test-preload-741456
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T11_34_34_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 11:34:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-741456
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 11:35:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 11:35:57 +0000   Mon, 16 Dec 2024 11:34:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 11:35:57 +0000   Mon, 16 Dec 2024 11:34:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 11:35:57 +0000   Mon, 16 Dec 2024 11:34:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 11:35:57 +0000   Mon, 16 Dec 2024 11:35:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    test-preload-741456
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3fff2de3e1564c03908f9dd292c516a7
	  System UUID:                3fff2de3-e156-4c03-908f-9dd292c516a7
	  Boot ID:                    cdfefa77-e595-46a5-be04-49a190ef1723
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2ckjf                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     77s
	  kube-system                 etcd-test-preload-741456                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         90s
	  kube-system                 kube-apiserver-test-preload-741456             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-test-preload-741456    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-g4jkk                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-test-preload-741456             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node test-preload-741456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node test-preload-741456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s                kubelet          Node test-preload-741456 status is now: NodeHasSufficientPID
	  Normal  NodeReady                80s                kubelet          Node test-preload-741456 status is now: NodeReady
	  Normal  RegisteredNode           78s                node-controller  Node test-preload-741456 event: Registered Node test-preload-741456 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-741456 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-741456 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-741456 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-741456 event: Registered Node test-preload-741456 in Controller
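
The Conditions and Events tables above are a `kubectl describe node`-style dump of the node object. As a rough client-go sketch (an assumption, not part of the test harness), the same conditions could be read programmatically; the kubeconfig path here is a placeholder:

```go
// Minimal sketch: fetch the node behind the "describe nodes" output above
// and print its conditions (MemoryPressure, DiskPressure, PIDPressure, Ready).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; adjust to the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"test-preload-741456", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}
```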
	
	
	==> dmesg <==
	[Dec16 11:35] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048740] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037415] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.831791] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.902686] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.566552] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.924502] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.062197] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057805] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.168060] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.137702] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.240539] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[ +12.939843] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[  +0.055915] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.623975] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +5.862693] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.600604] systemd-fstab-generator[1835]: Ignoring "noauto" option for root device
	[  +4.665727] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [91f5127b6c0117f961d0afa506276106502ea3e95e15cce01e4f9545aa69e311] <==
	{"level":"info","ts":"2024-12-16T11:35:43.509Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ec1614c5c0f7335e","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-16T11:35:43.547Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-16T11:35:43.551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e switched to configuration voters=(17011807482017166174)"}
	{"level":"info","ts":"2024-12-16T11:35:43.551Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","added-peer-id":"ec1614c5c0f7335e","added-peer-peer-urls":["https://192.168.39.107:2380"]}
	{"level":"info","ts":"2024-12-16T11:35:43.552Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1d5c088f9986766d","local-member-id":"ec1614c5c0f7335e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T11:35:43.552Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T11:35:43.554Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-16T11:35:43.554Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ec1614c5c0f7335e","initial-advertise-peer-urls":["https://192.168.39.107:2380"],"listen-peer-urls":["https://192.168.39.107:2380"],"advertise-client-urls":["https://192.168.39.107:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.107:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-16T11:35:43.554Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-16T11:35:43.554Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-12-16T11:35:43.554Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.107:2380"}
	{"level":"info","ts":"2024-12-16T11:35:44.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-16T11:35:44.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-16T11:35:44.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgPreVoteResp from ec1614c5c0f7335e at term 2"}
	{"level":"info","ts":"2024-12-16T11:35:44.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became candidate at term 3"}
	{"level":"info","ts":"2024-12-16T11:35:44.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e received MsgVoteResp from ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-12-16T11:35:44.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ec1614c5c0f7335e became leader at term 3"}
	{"level":"info","ts":"2024-12-16T11:35:44.973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ec1614c5c0f7335e elected leader ec1614c5c0f7335e at term 3"}
	{"level":"info","ts":"2024-12-16T11:35:44.974Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ec1614c5c0f7335e","local-member-attributes":"{Name:test-preload-741456 ClientURLs:[https://192.168.39.107:2379]}","request-path":"/0/members/ec1614c5c0f7335e/attributes","cluster-id":"1d5c088f9986766d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T11:35:44.974Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T11:35:44.976Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.107:2379"}
	{"level":"info","ts":"2024-12-16T11:35:44.976Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T11:35:44.977Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T11:35:44.977Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T11:35:44.977Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:36:04 up 0 min,  0 users,  load average: 0.81, 0.22, 0.07
	Linux test-preload-741456 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4b947508ddb103f80ef6f7810a6b1a28a8d2de5ce09fe0ea65566e70c382e3aa] <==
	I1216 11:35:47.243872       1 controller.go:85] Starting OpenAPI V3 controller
	I1216 11:35:47.243893       1 naming_controller.go:291] Starting NamingConditionController
	I1216 11:35:47.244045       1 establishing_controller.go:76] Starting EstablishingController
	I1216 11:35:47.244080       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1216 11:35:47.244097       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1216 11:35:47.244124       1 crd_finalizer.go:266] Starting CRDFinalizer
	E1216 11:35:47.313772       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1216 11:35:47.343494       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1216 11:35:47.367544       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 11:35:47.403400       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1216 11:35:47.404583       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1216 11:35:47.405069       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 11:35:47.411080       1 cache.go:39] Caches are synced for autoregister controller
	I1216 11:35:47.411355       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1216 11:35:47.411383       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1216 11:35:47.913175       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1216 11:35:48.210137       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 11:35:48.813709       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1216 11:35:49.011251       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1216 11:35:49.023853       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1216 11:35:49.061080       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1216 11:35:49.075982       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 11:35:49.081192       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 11:35:59.894041       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 11:35:59.911285       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [e85274011d03a9392aac1e165b2777f4a2b0f2cb1c3e8f37834b1438d7212817] <==
	I1216 11:35:59.896306       1 shared_informer.go:262] Caches are synced for service account
	I1216 11:35:59.898573       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1216 11:35:59.899700       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1216 11:35:59.903182       1 shared_informer.go:262] Caches are synced for persistent volume
	I1216 11:35:59.908277       1 shared_informer.go:262] Caches are synced for crt configmap
	I1216 11:35:59.919578       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1216 11:35:59.921198       1 shared_informer.go:262] Caches are synced for HPA
	I1216 11:35:59.921695       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1216 11:35:59.929539       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1216 11:35:59.930839       1 shared_informer.go:262] Caches are synced for ephemeral
	I1216 11:35:59.934848       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1216 11:35:59.943387       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1216 11:35:59.946877       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1216 11:35:59.982019       1 shared_informer.go:262] Caches are synced for namespace
	I1216 11:36:00.004979       1 shared_informer.go:262] Caches are synced for stateful set
	I1216 11:36:00.020022       1 shared_informer.go:262] Caches are synced for disruption
	I1216 11:36:00.020052       1 disruption.go:371] Sending events to api server.
	I1216 11:36:00.089861       1 shared_informer.go:262] Caches are synced for job
	I1216 11:36:00.097167       1 shared_informer.go:262] Caches are synced for cronjob
	I1216 11:36:00.107385       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1216 11:36:00.116712       1 shared_informer.go:262] Caches are synced for resource quota
	I1216 11:36:00.144952       1 shared_informer.go:262] Caches are synced for resource quota
	I1216 11:36:00.573122       1 shared_informer.go:262] Caches are synced for garbage collector
	I1216 11:36:00.573207       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1216 11:36:00.577386       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [138051e545d0f592c8436adb2426a7b5522f7187ecb353e86f207f4ec558a9ba] <==
	I1216 11:35:48.770964       1 node.go:163] Successfully retrieved node IP: 192.168.39.107
	I1216 11:35:48.771059       1 server_others.go:138] "Detected node IP" address="192.168.39.107"
	I1216 11:35:48.771089       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1216 11:35:48.801924       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1216 11:35:48.801954       1 server_others.go:206] "Using iptables Proxier"
	I1216 11:35:48.802322       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1216 11:35:48.803041       1 server.go:661] "Version info" version="v1.24.4"
	I1216 11:35:48.803077       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 11:35:48.804953       1 config.go:226] "Starting endpoint slice config controller"
	I1216 11:35:48.805281       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1216 11:35:48.806149       1 config.go:317] "Starting service config controller"
	I1216 11:35:48.806173       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1216 11:35:48.807337       1 config.go:444] "Starting node config controller"
	I1216 11:35:48.807394       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1216 11:35:48.906210       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1216 11:35:48.906718       1 shared_informer.go:262] Caches are synced for service config
	I1216 11:35:48.908548       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [cad742e38a0f696a1f64869ae7c571d407ddf9cba4332c3971d72fb21634a8be] <==
	I1216 11:35:44.076234       1 serving.go:348] Generated self-signed cert in-memory
	W1216 11:35:47.262537       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 11:35:47.262824       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 11:35:47.262918       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 11:35:47.262945       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 11:35:47.328621       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1216 11:35:47.328742       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 11:35:47.339760       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1216 11:35:47.339999       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 11:35:47.340041       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 11:35:47.340083       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1216 11:35:47.440925       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: I1216 11:35:47.484539    1145 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdzqj\" (UniqueName: \"kubernetes.io/projected/57613bc1-db63-425d-a411-ada86eed3b73-kube-api-access-bdzqj\") pod \"57613bc1-db63-425d-a411-ada86eed3b73\" (UID: \"57613bc1-db63-425d-a411-ada86eed3b73\") "
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: I1216 11:35:47.484607    1145 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57613bc1-db63-425d-a411-ada86eed3b73-config-volume\") pod \"57613bc1-db63-425d-a411-ada86eed3b73\" (UID: \"57613bc1-db63-425d-a411-ada86eed3b73\") "
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: E1216 11:35:47.485563    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: E1216 11:35:47.485815    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/45c0b538-c116-4984-91af-480902037a54-config-volume podName:45c0b538-c116-4984-91af-480902037a54 nodeName:}" failed. No retries permitted until 2024-12-16 11:35:47.985793766 +0000 UTC m=+5.795216831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/45c0b538-c116-4984-91af-480902037a54-config-volume") pod "coredns-6d4b75cb6d-2ckjf" (UID: "45c0b538-c116-4984-91af-480902037a54") : object "kube-system"/"coredns" not registered
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: W1216 11:35:47.486437    1145 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/57613bc1-db63-425d-a411-ada86eed3b73/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: W1216 11:35:47.486648    1145 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/57613bc1-db63-425d-a411-ada86eed3b73/volumes/kubernetes.io~projected/kube-api-access-bdzqj: clearQuota called, but quotas disabled
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: I1216 11:35:47.486819    1145 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57613bc1-db63-425d-a411-ada86eed3b73-kube-api-access-bdzqj" (OuterVolumeSpecName: "kube-api-access-bdzqj") pod "57613bc1-db63-425d-a411-ada86eed3b73" (UID: "57613bc1-db63-425d-a411-ada86eed3b73"). InnerVolumeSpecName "kube-api-access-bdzqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: I1216 11:35:47.486943    1145 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57613bc1-db63-425d-a411-ada86eed3b73-config-volume" (OuterVolumeSpecName: "config-volume") pod "57613bc1-db63-425d-a411-ada86eed3b73" (UID: "57613bc1-db63-425d-a411-ada86eed3b73"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: I1216 11:35:47.585137    1145 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57613bc1-db63-425d-a411-ada86eed3b73-config-volume\") on node \"test-preload-741456\" DevicePath \"\""
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: I1216 11:35:47.585175    1145 reconciler.go:384] "Volume detached for volume \"kube-api-access-bdzqj\" (UniqueName: \"kubernetes.io/projected/57613bc1-db63-425d-a411-ada86eed3b73-kube-api-access-bdzqj\") on node \"test-preload-741456\" DevicePath \"\""
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: E1216 11:35:47.990741    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 11:35:47 test-preload-741456 kubelet[1145]: E1216 11:35:47.990802    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/45c0b538-c116-4984-91af-480902037a54-config-volume podName:45c0b538-c116-4984-91af-480902037a54 nodeName:}" failed. No retries permitted until 2024-12-16 11:35:48.990788217 +0000 UTC m=+6.800211266 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/45c0b538-c116-4984-91af-480902037a54-config-volume") pod "coredns-6d4b75cb6d-2ckjf" (UID: "45c0b538-c116-4984-91af-480902037a54") : object "kube-system"/"coredns" not registered
	Dec 16 11:35:48 test-preload-741456 kubelet[1145]: I1216 11:35:48.446519    1145 scope.go:110] "RemoveContainer" containerID="3336a0fce20de2335304ce742b2d015e04b9822d4bfd65aef9ed80fe45157276"
	Dec 16 11:35:48 test-preload-741456 kubelet[1145]: E1216 11:35:48.996448    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 11:35:48 test-preload-741456 kubelet[1145]: E1216 11:35:48.996514    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/45c0b538-c116-4984-91af-480902037a54-config-volume podName:45c0b538-c116-4984-91af-480902037a54 nodeName:}" failed. No retries permitted until 2024-12-16 11:35:50.996499038 +0000 UTC m=+8.805922100 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/45c0b538-c116-4984-91af-480902037a54-config-volume") pod "coredns-6d4b75cb6d-2ckjf" (UID: "45c0b538-c116-4984-91af-480902037a54") : object "kube-system"/"coredns" not registered
	Dec 16 11:35:49 test-preload-741456 kubelet[1145]: E1216 11:35:49.412008    1145 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-2ckjf" podUID=45c0b538-c116-4984-91af-480902037a54
	Dec 16 11:35:49 test-preload-741456 kubelet[1145]: I1216 11:35:49.452941    1145 scope.go:110] "RemoveContainer" containerID="3336a0fce20de2335304ce742b2d015e04b9822d4bfd65aef9ed80fe45157276"
	Dec 16 11:35:49 test-preload-741456 kubelet[1145]: I1216 11:35:49.453289    1145 scope.go:110] "RemoveContainer" containerID="9ead9e0f9a1a6c6780a78f21e9210b92aa41c4dda87bebd259eb2abb0236349e"
	Dec 16 11:35:49 test-preload-741456 kubelet[1145]: E1216 11:35:49.453522    1145 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8145521c-8750-4253-83d9-4ecd8e325a44)\"" pod="kube-system/storage-provisioner" podUID=8145521c-8750-4253-83d9-4ecd8e325a44
	Dec 16 11:35:50 test-preload-741456 kubelet[1145]: I1216 11:35:50.417895    1145 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=57613bc1-db63-425d-a411-ada86eed3b73 path="/var/lib/kubelet/pods/57613bc1-db63-425d-a411-ada86eed3b73/volumes"
	Dec 16 11:35:50 test-preload-741456 kubelet[1145]: I1216 11:35:50.461083    1145 scope.go:110] "RemoveContainer" containerID="9ead9e0f9a1a6c6780a78f21e9210b92aa41c4dda87bebd259eb2abb0236349e"
	Dec 16 11:35:50 test-preload-741456 kubelet[1145]: E1216 11:35:50.461809    1145 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8145521c-8750-4253-83d9-4ecd8e325a44)\"" pod="kube-system/storage-provisioner" podUID=8145521c-8750-4253-83d9-4ecd8e325a44
	Dec 16 11:35:51 test-preload-741456 kubelet[1145]: E1216 11:35:51.008700    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 11:35:51 test-preload-741456 kubelet[1145]: E1216 11:35:51.008790    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/45c0b538-c116-4984-91af-480902037a54-config-volume podName:45c0b538-c116-4984-91af-480902037a54 nodeName:}" failed. No retries permitted until 2024-12-16 11:35:55.008773374 +0000 UTC m=+12.818196439 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/45c0b538-c116-4984-91af-480902037a54-config-volume") pod "coredns-6d4b75cb6d-2ckjf" (UID: "45c0b538-c116-4984-91af-480902037a54") : object "kube-system"/"coredns" not registered
	Dec 16 11:35:51 test-preload-741456 kubelet[1145]: E1216 11:35:51.411801    1145 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-2ckjf" podUID=45c0b538-c116-4984-91af-480902037a54
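Note on the MountVolume.SetUp retries above: the durationBeforeRetry value doubles on every failed attempt (500ms, then 1s, 2s, 4s), which is the kubelet's exponential backoff for volume operations while the "kube-system"/"coredns" ConfigMap is not yet registered. A minimal Go sketch of that doubling-retry pattern follows; it is an illustration only, not the kubelet's actual code, and the attempt count and cap are assumptions:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait after each failure and
// capping it at maxDelay, mirroring the 500ms -> 1s -> 2s -> 4s pattern
// visible in the kubelet log above.
func retryWithBackoff(op func() error, initial, maxDelay time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			i+1, time.Now().Add(delay).Format(time.RFC3339), delay)
		time.Sleep(delay)
		if delay < maxDelay {
			delay *= 2
		}
	}
	return errors.New("operation still failing after all retries")
}

func main() {
	// Illustrative stand-in for MountVolume.SetUp failing because the
	// "kube-system"/"coredns" ConfigMap is not yet registered.
	mount := func() error { return errors.New(`object "kube-system"/"coredns" not registered`) }
	_ = retryWithBackoff(mount, 500*time.Millisecond, 2*time.Minute, 4)
}

Once the ConfigMap becomes registered again after the restart, the next scheduled retry would be expected to succeed and the coredns pod can start.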
	
	
	==> storage-provisioner [9ead9e0f9a1a6c6780a78f21e9210b92aa41c4dda87bebd259eb2abb0236349e] <==
	I1216 11:35:48.523245       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1216 11:35:48.524744       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
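The storage-provisioner exit in the log above is a symptom rather than a separate fault: it dies because the API server is not yet reachable through the default "kubernetes" Service VIP (dial tcp 10.96.0.1:443: connect: connection refused), which then feeds the CrashLoopBackOff seen in the kubelet output. A minimal Go sketch of that kind of reachability probe, assuming the same VIP and port quoted in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the default ClusterIP of the "kubernetes" Service,
	// the address the storage-provisioner tried in the log above.
	addr := "10.96.0.1:443"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// This is the state the provisioner saw: connection refused.
		fmt.Printf("error getting server version: dial %s: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("API server endpoint %s is reachable\n", addr)
}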
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-741456 -n test-preload-741456
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-741456 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-741456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-741456
--- FAIL: TestPreload (163.63s)

                                                
                                    
x
+
TestKubernetesUpgrade (438.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854528 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1216 11:41:06.845740  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-854528 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m7.147691498s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-854528] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-854528" primary control-plane node in "kubernetes-upgrade-854528" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:40:51.080521  257111 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:40:51.080676  257111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:40:51.080689  257111 out.go:358] Setting ErrFile to fd 2...
	I1216 11:40:51.080695  257111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:40:51.080993  257111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:40:51.081879  257111 out.go:352] Setting JSON to false
	I1216 11:40:51.083315  257111 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12198,"bootTime":1734337053,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 11:40:51.083495  257111 start.go:139] virtualization: kvm guest
	I1216 11:40:51.086021  257111 out.go:177] * [kubernetes-upgrade-854528] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 11:40:51.087737  257111 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:40:51.087732  257111 notify.go:220] Checking for updates...
	I1216 11:40:51.090736  257111 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:40:51.092219  257111 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:40:51.093502  257111 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:40:51.094765  257111 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 11:40:51.096036  257111 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:40:51.097838  257111 config.go:182] Loaded profile config "NoKubernetes-911686": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1216 11:40:51.097935  257111 config.go:182] Loaded profile config "cert-expiration-002454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:40:51.098027  257111 config.go:182] Loaded profile config "running-upgrade-446525": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1216 11:40:51.098123  257111 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:40:51.136636  257111 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 11:40:51.138058  257111 start.go:297] selected driver: kvm2
	I1216 11:40:51.138078  257111 start.go:901] validating driver "kvm2" against <nil>
	I1216 11:40:51.138091  257111 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:40:51.138986  257111 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:40:51.139108  257111 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 11:40:51.156389  257111 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 11:40:51.156445  257111 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:40:51.156768  257111 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 11:40:51.156798  257111 cni.go:84] Creating CNI manager for ""
	I1216 11:40:51.156850  257111 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:40:51.156863  257111 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 11:40:51.156913  257111 start.go:340] cluster config:
	{Name:kubernetes-upgrade-854528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-854528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:40:51.157107  257111 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:40:51.160678  257111 out.go:177] * Starting "kubernetes-upgrade-854528" primary control-plane node in "kubernetes-upgrade-854528" cluster
	I1216 11:40:51.162069  257111 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 11:40:51.162145  257111 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 11:40:51.162155  257111 cache.go:56] Caching tarball of preloaded images
	I1216 11:40:51.162252  257111 preload.go:172] Found /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 11:40:51.162268  257111 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 11:40:51.162356  257111 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/config.json ...
	I1216 11:40:51.162378  257111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/config.json: {Name:mk87ede3be72ba8cb7aa77a3e02d5227eedeb7c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:40:51.162527  257111 start.go:360] acquireMachinesLock for kubernetes-upgrade-854528: {Name:mk416695f38f04563a2ca155c5383ee4d8f0f97b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:41:26.734652  257111 start.go:364] duration metric: took 35.572080064s to acquireMachinesLock for "kubernetes-upgrade-854528"
	I1216 11:41:26.734743  257111 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-854528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-854528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:41:26.734886  257111 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 11:41:26.736467  257111 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 11:41:26.736816  257111 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:41:26.736862  257111 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:41:26.757324  257111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34469
	I1216 11:41:26.757875  257111 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:41:26.758517  257111 main.go:141] libmachine: Using API Version  1
	I1216 11:41:26.758548  257111 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:41:26.759074  257111 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:41:26.759329  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetMachineName
	I1216 11:41:26.759498  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:41:26.759720  257111 start.go:159] libmachine.API.Create for "kubernetes-upgrade-854528" (driver="kvm2")
	I1216 11:41:26.759759  257111 client.go:168] LocalClient.Create starting
	I1216 11:41:26.759802  257111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem
	I1216 11:41:26.759852  257111 main.go:141] libmachine: Decoding PEM data...
	I1216 11:41:26.759876  257111 main.go:141] libmachine: Parsing certificate...
	I1216 11:41:26.759962  257111 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem
	I1216 11:41:26.760004  257111 main.go:141] libmachine: Decoding PEM data...
	I1216 11:41:26.760019  257111 main.go:141] libmachine: Parsing certificate...
	I1216 11:41:26.760050  257111 main.go:141] libmachine: Running pre-create checks...
	I1216 11:41:26.760063  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .PreCreateCheck
	I1216 11:41:26.761951  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetConfigRaw
	I1216 11:41:26.762491  257111 main.go:141] libmachine: Creating machine...
	I1216 11:41:26.762514  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .Create
	I1216 11:41:26.762698  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) creating KVM machine...
	I1216 11:41:26.762730  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) creating network...
	I1216 11:41:26.764124  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found existing default KVM network
	I1216 11:41:26.766140  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:26.765938  257482 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:8a:33} reservation:<nil>}
	I1216 11:41:26.767286  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:26.767187  257482 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f3:28:ed} reservation:<nil>}
	I1216 11:41:26.769043  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:26.768905  257482 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002af4e0}
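The three network.go lines above show how the driver picks an address range: it walks candidate private /24 subnets, skips 192.168.39.0/24 and 192.168.50.0/24 because they are already bound to virbr1 and virbr3, and settles on 192.168.61.0/24. A rough Go sketch of that scan, with the taken set hard-coded purely to mirror the log (the real code discovers it from the host's interfaces and existing libvirt networks):

package main

import "fmt"

// freeSubnet returns the first 192.168.X.0/24 candidate that is not
// already used. The candidate list and taken set are illustrative,
// chosen to reproduce the 39 -> 50 -> 61 sequence seen in the log.
func freeSubnet(taken map[string]bool) string {
	for _, third := range []int{39, 50, 61, 72, 83, 94} {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			fmt.Printf("skipping subnet %s that is taken\n", cidr)
			continue
		}
		return cidr
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.39.0/24": true, // virbr1 in the log
		"192.168.50.0/24": true, // virbr3 in the log
	}
	fmt.Println("using free private subnet", freeSubnet(taken)) // 192.168.61.0/24
}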
	I1216 11:41:26.769081  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | created network xml: 
	I1216 11:41:26.769097  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | <network>
	I1216 11:41:26.769106  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG |   <name>mk-kubernetes-upgrade-854528</name>
	I1216 11:41:26.769121  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG |   <dns enable='no'/>
	I1216 11:41:26.769131  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG |   
	I1216 11:41:26.769142  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1216 11:41:26.769155  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG |     <dhcp>
	I1216 11:41:26.769222  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1216 11:41:26.769248  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG |     </dhcp>
	I1216 11:41:26.769294  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG |   </ip>
	I1216 11:41:26.769315  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG |   
	I1216 11:41:26.769328  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | </network>
	I1216 11:41:26.769335  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | 
	I1216 11:41:26.774131  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | trying to create private KVM network mk-kubernetes-upgrade-854528 192.168.61.0/24...
	I1216 11:41:26.855941  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | private KVM network mk-kubernetes-upgrade-854528 192.168.61.0/24 created
	I1216 11:41:26.856103  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) setting up store path in /home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528 ...
	I1216 11:41:26.856136  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) building disk image from file:///home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1216 11:41:26.856176  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:26.856083  257482 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:41:26.856385  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Downloading /home/jenkins/minikube-integration/20107-210204/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1216 11:41:27.136652  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:27.136518  257482 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa...
	I1216 11:41:27.257864  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:27.257749  257482 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/kubernetes-upgrade-854528.rawdisk...
	I1216 11:41:27.257893  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | Writing magic tar header
	I1216 11:41:27.257916  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | Writing SSH key tar header
	I1216 11:41:27.257942  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:27.257869  257482 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528 ...
	I1216 11:41:27.257983  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528
	I1216 11:41:27.258052  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube/machines
	I1216 11:41:27.258082  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:41:27.258110  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528 (perms=drwx------)
	I1216 11:41:27.258143  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204
	I1216 11:41:27.258160  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube/machines (perms=drwxr-xr-x)
	I1216 11:41:27.258169  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1216 11:41:27.258186  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | checking permissions on dir: /home/jenkins
	I1216 11:41:27.258194  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | checking permissions on dir: /home
	I1216 11:41:27.258211  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | skipping /home - not owner
	I1216 11:41:27.258253  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube (perms=drwxr-xr-x)
	I1216 11:41:27.258281  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) setting executable bit set on /home/jenkins/minikube-integration/20107-210204 (perms=drwxrwxr-x)
	I1216 11:41:27.258336  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 11:41:27.258369  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 11:41:27.258381  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) creating domain...
	I1216 11:41:27.259396  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) define libvirt domain using xml: 
	I1216 11:41:27.259409  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) <domain type='kvm'>
	I1216 11:41:27.259440  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   <name>kubernetes-upgrade-854528</name>
	I1216 11:41:27.259467  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   <memory unit='MiB'>2200</memory>
	I1216 11:41:27.259492  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   <vcpu>2</vcpu>
	I1216 11:41:27.259504  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   <features>
	I1216 11:41:27.259516  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <acpi/>
	I1216 11:41:27.259524  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <apic/>
	I1216 11:41:27.259532  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <pae/>
	I1216 11:41:27.259543  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     
	I1216 11:41:27.259556  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   </features>
	I1216 11:41:27.259576  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   <cpu mode='host-passthrough'>
	I1216 11:41:27.259587  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   
	I1216 11:41:27.259593  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   </cpu>
	I1216 11:41:27.259602  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   <os>
	I1216 11:41:27.259609  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <type>hvm</type>
	I1216 11:41:27.259619  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <boot dev='cdrom'/>
	I1216 11:41:27.259624  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <boot dev='hd'/>
	I1216 11:41:27.259630  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <bootmenu enable='no'/>
	I1216 11:41:27.259637  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   </os>
	I1216 11:41:27.259642  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   <devices>
	I1216 11:41:27.259660  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <disk type='file' device='cdrom'>
	I1216 11:41:27.259679  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <source file='/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/boot2docker.iso'/>
	I1216 11:41:27.259690  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <target dev='hdc' bus='scsi'/>
	I1216 11:41:27.259699  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <readonly/>
	I1216 11:41:27.259708  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     </disk>
	I1216 11:41:27.259724  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <disk type='file' device='disk'>
	I1216 11:41:27.259736  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 11:41:27.259751  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <source file='/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/kubernetes-upgrade-854528.rawdisk'/>
	I1216 11:41:27.259763  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <target dev='hda' bus='virtio'/>
	I1216 11:41:27.259778  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     </disk>
	I1216 11:41:27.259789  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <interface type='network'>
	I1216 11:41:27.259801  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <source network='mk-kubernetes-upgrade-854528'/>
	I1216 11:41:27.259816  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <model type='virtio'/>
	I1216 11:41:27.259825  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     </interface>
	I1216 11:41:27.259831  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <interface type='network'>
	I1216 11:41:27.259844  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <source network='default'/>
	I1216 11:41:27.259855  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <model type='virtio'/>
	I1216 11:41:27.259867  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     </interface>
	I1216 11:41:27.259876  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <serial type='pty'>
	I1216 11:41:27.259888  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <target port='0'/>
	I1216 11:41:27.259908  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     </serial>
	I1216 11:41:27.259919  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <console type='pty'>
	I1216 11:41:27.259927  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <target type='serial' port='0'/>
	I1216 11:41:27.259939  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     </console>
	I1216 11:41:27.259950  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     <rng model='virtio'>
	I1216 11:41:27.259962  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)       <backend model='random'>/dev/random</backend>
	I1216 11:41:27.259976  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     </rng>
	I1216 11:41:27.259987  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     
	I1216 11:41:27.259994  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)     
	I1216 11:41:27.259999  257111 main.go:141] libmachine: (kubernetes-upgrade-854528)   </devices>
	I1216 11:41:27.260006  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) </domain>
	I1216 11:41:27.260020  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) 
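The XML above is the complete libvirt domain minikube defines for the node: boot from the boot2docker ISO first, then the raw disk, with two virtio NICs on the private mk-kubernetes-upgrade-854528 network and the default network. Reproducing the define-and-start step by hand would look roughly like the sketch below, shelling out to virsh and assuming the XML has been saved to a local file (the file name is hypothetical):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// defineAndStart registers a libvirt domain from an XML file and boots it,
// the same two steps the kvm2 driver performs programmatically.
func defineAndStart(xmlPath, name string) error {
	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical file holding the domain XML shown in the log above.
	if err := defineAndStart("kubernetes-upgrade-854528.xml", "kubernetes-upgrade-854528"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("domain started; waiting for a DHCP lease would come next")
}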
	I1216 11:41:27.266900  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:5d:fb:74 in network default
	I1216 11:41:27.267623  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:27.267648  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) starting domain...
	I1216 11:41:27.267687  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) ensuring networks are active...
	I1216 11:41:27.268504  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Ensuring network default is active
	I1216 11:41:27.269058  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Ensuring network mk-kubernetes-upgrade-854528 is active
	I1216 11:41:27.269825  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) getting domain XML...
	I1216 11:41:27.270762  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) creating domain...
	I1216 11:41:28.637881  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) waiting for IP...
	I1216 11:41:28.638996  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:28.639435  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:28.639511  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:28.639455  257482 retry.go:31] will retry after 197.168657ms: waiting for domain to come up
	I1216 11:41:28.837837  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:28.838346  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:28.838373  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:28.838330  257482 retry.go:31] will retry after 316.095919ms: waiting for domain to come up
	I1216 11:41:29.155940  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:29.156506  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:29.156536  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:29.156483  257482 retry.go:31] will retry after 446.294995ms: waiting for domain to come up
	I1216 11:41:29.605888  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:29.606316  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:29.606345  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:29.606315  257482 retry.go:31] will retry after 411.212156ms: waiting for domain to come up
	I1216 11:41:30.018986  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:30.019600  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:30.019635  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:30.019495  257482 retry.go:31] will retry after 473.701994ms: waiting for domain to come up
	I1216 11:41:30.495235  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:30.495904  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:30.495935  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:30.495864  257482 retry.go:31] will retry after 923.349461ms: waiting for domain to come up
	I1216 11:41:31.421026  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:31.421715  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:31.421743  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:31.421695  257482 retry.go:31] will retry after 1.182359s: waiting for domain to come up
	I1216 11:41:32.605425  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:32.605865  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:32.605896  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:32.605831  257482 retry.go:31] will retry after 1.282338938s: waiting for domain to come up
	I1216 11:41:33.890424  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:33.890897  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:33.890917  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:33.890877  257482 retry.go:31] will retry after 1.196115478s: waiting for domain to come up
	I1216 11:41:35.089202  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:35.089793  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:35.089815  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:35.089767  257482 retry.go:31] will retry after 1.779460958s: waiting for domain to come up
	I1216 11:41:36.871289  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:36.871843  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:36.871873  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:36.871791  257482 retry.go:31] will retry after 2.269144461s: waiting for domain to come up
	I1216 11:41:39.144273  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:39.144764  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:39.144789  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:39.144725  257482 retry.go:31] will retry after 2.51536342s: waiting for domain to come up
	I1216 11:41:41.662145  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:41.662616  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:41.662646  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:41.662577  257482 retry.go:31] will retry after 4.37675355s: waiting for domain to come up
	I1216 11:41:46.042028  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:46.042500  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find current IP address of domain kubernetes-upgrade-854528 in network mk-kubernetes-upgrade-854528
	I1216 11:41:46.042524  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | I1216 11:41:46.042471  257482 retry.go:31] will retry after 5.372912596s: waiting for domain to come up
	I1216 11:41:51.418842  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.419351  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) found domain IP: 192.168.61.182
	I1216 11:41:51.419383  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has current primary IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.419393  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) reserving static IP address...
	I1216 11:41:51.419743  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-854528", mac: "52:54:00:b2:39:cd", ip: "192.168.61.182"} in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.498994  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | Getting to WaitForSSH function...
	I1216 11:41:51.499031  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) reserved static IP address 192.168.61.182 for domain kubernetes-upgrade-854528
	I1216 11:41:51.499044  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) waiting for SSH...
	I1216 11:41:51.501823  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.502281  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:51.502326  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.502423  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | Using SSH client type: external
	I1216 11:41:51.502452  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa (-rw-------)
	I1216 11:41:51.502495  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 11:41:51.502513  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | About to run SSH command:
	I1216 11:41:51.502526  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | exit 0
	I1216 11:41:51.628819  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | SSH cmd err, output: <nil>: 
	I1216 11:41:51.629158  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) KVM machine creation complete
	I1216 11:41:51.629493  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetConfigRaw
	I1216 11:41:51.630123  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:41:51.630319  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:41:51.630490  257111 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1216 11:41:51.630520  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetState
	I1216 11:41:51.631699  257111 main.go:141] libmachine: Detecting operating system of created instance...
	I1216 11:41:51.631715  257111 main.go:141] libmachine: Waiting for SSH to be available...
	I1216 11:41:51.631723  257111 main.go:141] libmachine: Getting to WaitForSSH function...
	I1216 11:41:51.631732  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:51.634223  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.634606  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:51.634632  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.634727  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:41:51.634921  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:51.635121  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:51.635259  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:41:51.635417  257111 main.go:141] libmachine: Using SSH client type: native
	I1216 11:41:51.635664  257111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:41:51.635676  257111 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1216 11:41:51.736272  257111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
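WaitForSSH above amounts to running `exit 0` over SSH until it returns cleanly, first with the external ssh client and then with the native one. A small Go sketch of the external-client variant, reusing the non-interactive options, host and key path quoted in the DBG lines (all taken from this log, not hard requirements):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the guest with the external ssh client,
// using the same non-interactive options shown in the DBG output above.
func sshReady(host, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	// Host and key path copied from the log; substitute your own values.
	host := "192.168.61.182"
	key := "/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa"
	for !sshReady(host, key) {
		fmt.Println("waiting for SSH...")
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH is available")
}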
	I1216 11:41:51.736310  257111 main.go:141] libmachine: Detecting the provisioner...
	I1216 11:41:51.736325  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:51.739270  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.739631  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:51.739653  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.739813  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:41:51.740019  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:51.740200  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:51.740312  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:41:51.740435  257111 main.go:141] libmachine: Using SSH client type: native
	I1216 11:41:51.740651  257111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:41:51.740664  257111 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1216 11:41:51.846089  257111 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1216 11:41:51.846178  257111 main.go:141] libmachine: found compatible host: buildroot
	I1216 11:41:51.846194  257111 main.go:141] libmachine: Provisioning with buildroot...
	I1216 11:41:51.846202  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetMachineName
	I1216 11:41:51.846549  257111 buildroot.go:166] provisioning hostname "kubernetes-upgrade-854528"
	I1216 11:41:51.846593  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetMachineName
	I1216 11:41:51.846812  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:51.849619  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.849964  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:51.849993  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.850202  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:41:51.850379  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:51.850520  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:51.850628  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:41:51.850805  257111 main.go:141] libmachine: Using SSH client type: native
	I1216 11:41:51.851038  257111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:41:51.851059  257111 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-854528 && echo "kubernetes-upgrade-854528" | sudo tee /etc/hostname
	I1216 11:41:51.973971  257111 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-854528
	
	I1216 11:41:51.974028  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:51.977044  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.977441  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:51.977480  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:51.977676  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:41:51.977892  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:51.978086  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:51.978215  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:41:51.978413  257111 main.go:141] libmachine: Using SSH client type: native
	I1216 11:41:51.978587  257111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:41:51.978604  257111 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-854528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-854528/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-854528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:41:52.089289  257111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:41:52.089336  257111 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 11:41:52.089362  257111 buildroot.go:174] setting up certificates
	I1216 11:41:52.089373  257111 provision.go:84] configureAuth start
	I1216 11:41:52.089385  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetMachineName
	I1216 11:41:52.089704  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetIP
	I1216 11:41:52.092644  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.092985  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.093015  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.093107  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:52.095074  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.095385  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.095422  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.095520  257111 provision.go:143] copyHostCerts
	I1216 11:41:52.095585  257111 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem, removing ...
	I1216 11:41:52.095599  257111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem
	I1216 11:41:52.095672  257111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 11:41:52.095806  257111 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem, removing ...
	I1216 11:41:52.095818  257111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem
	I1216 11:41:52.095843  257111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 11:41:52.095899  257111 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem, removing ...
	I1216 11:41:52.095906  257111 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem
	I1216 11:41:52.095923  257111 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
	I1216 11:41:52.095971  257111 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-854528 san=[127.0.0.1 192.168.61.182 kubernetes-upgrade-854528 localhost minikube]
	I1216 11:41:52.261345  257111 provision.go:177] copyRemoteCerts
	I1216 11:41:52.261405  257111 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:41:52.261442  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:52.264042  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.264408  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.264442  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.264663  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:41:52.264871  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:52.265051  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:41:52.265209  257111 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa Username:docker}
	I1216 11:41:52.347198  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:41:52.370275  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1216 11:41:52.393257  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 11:41:52.415731  257111 provision.go:87] duration metric: took 326.345153ms to configureAuth
	I1216 11:41:52.415763  257111 buildroot.go:189] setting minikube options for container-runtime
	I1216 11:41:52.415934  257111 config.go:182] Loaded profile config "kubernetes-upgrade-854528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 11:41:52.416024  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:52.418724  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.419212  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.419247  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.419447  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:41:52.419645  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:52.419818  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:52.420066  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:41:52.420228  257111 main.go:141] libmachine: Using SSH client type: native
	I1216 11:41:52.420416  257111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:41:52.420438  257111 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:41:52.635159  257111 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:41:52.635191  257111 main.go:141] libmachine: Checking connection to Docker...
	I1216 11:41:52.635200  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetURL
	I1216 11:41:52.636472  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | using libvirt version 6000000
	I1216 11:41:52.638953  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.639335  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.639360  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.639547  257111 main.go:141] libmachine: Docker is up and running!
	I1216 11:41:52.639561  257111 main.go:141] libmachine: Reticulating splines...
	I1216 11:41:52.639569  257111 client.go:171] duration metric: took 25.879800966s to LocalClient.Create
	I1216 11:41:52.639601  257111 start.go:167] duration metric: took 25.8798858s to libmachine.API.Create "kubernetes-upgrade-854528"
	I1216 11:41:52.639616  257111 start.go:293] postStartSetup for "kubernetes-upgrade-854528" (driver="kvm2")
	I1216 11:41:52.639631  257111 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:41:52.639659  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:41:52.639912  257111 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:41:52.639944  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:52.642142  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.642498  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.642534  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.642643  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:41:52.642827  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:52.642996  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:41:52.643171  257111 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa Username:docker}
	I1216 11:41:52.723316  257111 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:41:52.727107  257111 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 11:41:52.727138  257111 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 11:41:52.727223  257111 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 11:41:52.727344  257111 filesync.go:149] local asset: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem -> 2175192.pem in /etc/ssl/certs
	I1216 11:41:52.727447  257111 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 11:41:52.736391  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:41:52.759426  257111 start.go:296] duration metric: took 119.793542ms for postStartSetup
	I1216 11:41:52.759490  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetConfigRaw
	I1216 11:41:52.760107  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetIP
	I1216 11:41:52.762886  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.763320  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.763351  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.763578  257111 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/config.json ...
	I1216 11:41:52.763798  257111 start.go:128] duration metric: took 26.028898063s to createHost
	I1216 11:41:52.763828  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:52.766028  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.766364  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.766387  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.766577  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:41:52.766768  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:52.766948  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:52.767085  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:41:52.767247  257111 main.go:141] libmachine: Using SSH client type: native
	I1216 11:41:52.767426  257111 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:41:52.767436  257111 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 11:41:52.869416  257111 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734349312.856743697
	
	I1216 11:41:52.869445  257111 fix.go:216] guest clock: 1734349312.856743697
	I1216 11:41:52.869456  257111 fix.go:229] Guest: 2024-12-16 11:41:52.856743697 +0000 UTC Remote: 2024-12-16 11:41:52.763813489 +0000 UTC m=+61.735516653 (delta=92.930208ms)
	I1216 11:41:52.869484  257111 fix.go:200] guest clock delta is within tolerance: 92.930208ms
	I1216 11:41:52.869491  257111 start.go:83] releasing machines lock for "kubernetes-upgrade-854528", held for 26.134797326s
	I1216 11:41:52.869521  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:41:52.869834  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetIP
	I1216 11:41:52.872819  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.873310  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.873337  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.873613  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:41:52.874110  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:41:52.874299  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:41:52.874445  257111 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:41:52.874500  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:52.874540  257111 ssh_runner.go:195] Run: cat /version.json
	I1216 11:41:52.874568  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:41:52.877208  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.877516  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.877779  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.877807  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.877843  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:52.877860  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:52.877891  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:41:52.878052  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:52.878145  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:41:52.878232  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:41:52.878334  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:41:52.878425  257111 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa Username:docker}
	I1216 11:41:52.878459  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:41:52.878595  257111 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa Username:docker}
	I1216 11:41:52.957788  257111 ssh_runner.go:195] Run: systemctl --version
	I1216 11:41:52.977269  257111 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:41:53.136032  257111 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 11:41:53.142178  257111 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 11:41:53.142248  257111 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:41:53.158453  257111 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 11:41:53.158478  257111 start.go:495] detecting cgroup driver to use...
	I1216 11:41:53.158537  257111 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:41:53.180497  257111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:41:53.194607  257111 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:41:53.194682  257111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:41:53.208497  257111 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:41:53.221588  257111 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:41:53.347393  257111 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:41:53.486338  257111 docker.go:233] disabling docker service ...
	I1216 11:41:53.486412  257111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:41:53.501764  257111 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:41:53.516442  257111 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:41:53.669924  257111 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:41:53.801844  257111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:41:53.817223  257111 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:41:53.837258  257111 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 11:41:53.837359  257111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:41:53.850380  257111 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:41:53.850462  257111 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:41:53.865171  257111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:41:53.879176  257111 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:41:53.890668  257111 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:41:53.903025  257111 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:41:53.912648  257111 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 11:41:53.912732  257111 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 11:41:53.925356  257111 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 11:41:53.935507  257111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:41:54.056946  257111 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 11:41:54.155754  257111 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:41:54.155849  257111 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:41:54.160352  257111 start.go:563] Will wait 60s for crictl version
	I1216 11:41:54.160430  257111 ssh_runner.go:195] Run: which crictl
	I1216 11:41:54.164032  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:41:54.207201  257111 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 11:41:54.207273  257111 ssh_runner.go:195] Run: crio --version
	I1216 11:41:54.234671  257111 ssh_runner.go:195] Run: crio --version
	I1216 11:41:54.265118  257111 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 11:41:54.266356  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetIP
	I1216 11:41:54.268885  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:54.269290  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:41:41 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:41:54.269325  257111 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:41:54.269567  257111 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 11:41:54.273488  257111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:41:54.286381  257111 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-854528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-854528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:41:54.286534  257111 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 11:41:54.286607  257111 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:41:54.322370  257111 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 11:41:54.322456  257111 ssh_runner.go:195] Run: which lz4
	I1216 11:41:54.326632  257111 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 11:41:54.330558  257111 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 11:41:54.330602  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1216 11:41:55.895759  257111 crio.go:462] duration metric: took 1.569165412s to copy over tarball
	I1216 11:41:55.895845  257111 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 11:41:58.587023  257111 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.691148971s)
	I1216 11:41:58.587051  257111 crio.go:469] duration metric: took 2.691256899s to extract the tarball
	I1216 11:41:58.587059  257111 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 11:41:58.630033  257111 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:41:58.676323  257111 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 11:41:58.676368  257111 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 11:41:58.676470  257111 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:41:58.676505  257111 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:41:58.676531  257111 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 11:41:58.676540  257111 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:41:58.676450  257111 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:41:58.676562  257111 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 11:41:58.676582  257111 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 11:41:58.676566  257111 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:41:58.678487  257111 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:41:58.678495  257111 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 11:41:58.678551  257111 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 11:41:58.678582  257111 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:41:58.678590  257111 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 11:41:58.678598  257111 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:41:58.678591  257111 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:41:58.678562  257111 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:41:58.832112  257111 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:41:58.840057  257111 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:41:58.843274  257111 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:41:58.860441  257111 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 11:41:58.860450  257111 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:41:58.862919  257111 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 11:41:58.886057  257111 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 11:41:58.925880  257111 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 11:41:58.925980  257111 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 11:41:58.926029  257111 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:41:58.926079  257111 ssh_runner.go:195] Run: which crictl
	I1216 11:41:58.925990  257111 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:41:58.926232  257111 ssh_runner.go:195] Run: which crictl
	I1216 11:41:58.957356  257111 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 11:41:58.957420  257111 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:41:58.957475  257111 ssh_runner.go:195] Run: which crictl
	I1216 11:41:58.993315  257111 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 11:41:58.993371  257111 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:41:58.993424  257111 ssh_runner.go:195] Run: which crictl
	I1216 11:41:59.003901  257111 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 11:41:59.003958  257111 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 11:41:59.004030  257111 ssh_runner.go:195] Run: which crictl
	I1216 11:41:59.012040  257111 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 11:41:59.012089  257111 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 11:41:59.012136  257111 ssh_runner.go:195] Run: which crictl
	I1216 11:41:59.019619  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:41:59.019629  257111 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 11:41:59.019658  257111 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 11:41:59.019664  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:41:59.019687  257111 ssh_runner.go:195] Run: which crictl
	I1216 11:41:59.019700  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:41:59.019705  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:41:59.019736  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 11:41:59.019814  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 11:41:59.100505  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:41:59.168687  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:41:59.168728  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 11:41:59.168688  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 11:41:59.168821  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 11:41:59.168853  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:41:59.168894  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:41:59.168941  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:41:59.320192  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 11:41:59.320244  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:41:59.321704  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 11:41:59.325145  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 11:41:59.325199  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:41:59.325243  257111 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 11:41:59.325270  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:41:59.420413  257111 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 11:41:59.420447  257111 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 11:41:59.455667  257111 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 11:41:59.461696  257111 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 11:41:59.461762  257111 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 11:41:59.461811  257111 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 11:41:59.473068  257111 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 11:41:59.609007  257111 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:41:59.749235  257111 cache_images.go:92] duration metric: took 1.072840685s to LoadCachedImages
	W1216 11:41:59.749378  257111 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I1216 11:41:59.749396  257111 kubeadm.go:934] updating node { 192.168.61.182 8443 v1.20.0 crio true true} ...
	I1216 11:41:59.749530  257111 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-854528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-854528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 11:41:59.749604  257111 ssh_runner.go:195] Run: crio config
	I1216 11:41:59.809641  257111 cni.go:84] Creating CNI manager for ""
	I1216 11:41:59.809671  257111 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:41:59.809684  257111 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 11:41:59.809710  257111 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.182 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-854528 NodeName:kubernetes-upgrade-854528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 11:41:59.809872  257111 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-854528"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 11:41:59.809952  257111 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 11:41:59.822594  257111 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:41:59.822684  257111 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:41:59.834841  257111 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1216 11:41:59.852078  257111 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:41:59.869436  257111 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1216 11:41:59.888908  257111 ssh_runner.go:195] Run: grep 192.168.61.182	control-plane.minikube.internal$ /etc/hosts
	I1216 11:41:59.892718  257111 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:41:59.908633  257111 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:42:00.031432  257111 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:42:00.052791  257111 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528 for IP: 192.168.61.182
	I1216 11:42:00.052815  257111 certs.go:194] generating shared ca certs ...
	I1216 11:42:00.052839  257111 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:42:00.053046  257111 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 11:42:00.053111  257111 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 11:42:00.053126  257111 certs.go:256] generating profile certs ...
	I1216 11:42:00.053203  257111 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/client.key
	I1216 11:42:00.053233  257111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/client.crt with IP's: []
	I1216 11:42:00.125242  257111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/client.crt ...
	I1216 11:42:00.125273  257111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/client.crt: {Name:mkbf77b0db69fb0f8acde1ccba9a4a6fd6639d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:42:00.129408  257111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/client.key ...
	I1216 11:42:00.129447  257111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/client.key: {Name:mk98498d9748a30802e3e77f4665a4019325c7e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:42:00.129618  257111 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.key.ae3c87f7
	I1216 11:42:00.129643  257111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.crt.ae3c87f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.182]
	I1216 11:42:00.207508  257111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.crt.ae3c87f7 ...
	I1216 11:42:00.207544  257111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.crt.ae3c87f7: {Name:mk0e089e96aab1ca9e29071d9ff1f7b99accba5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:42:00.274770  257111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.key.ae3c87f7 ...
	I1216 11:42:00.274830  257111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.key.ae3c87f7: {Name:mk93bdf139e55acc728035a22200a67a43a10a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:42:00.274961  257111 certs.go:381] copying /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.crt.ae3c87f7 -> /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.crt
	I1216 11:42:00.275075  257111 certs.go:385] copying /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.key.ae3c87f7 -> /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.key
	I1216 11:42:00.275151  257111 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.key
	I1216 11:42:00.275173  257111 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.crt with IP's: []
	I1216 11:42:00.347110  257111 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.crt ...
	I1216 11:42:00.347143  257111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.crt: {Name:mk12cb536b175bd90c643dc6bb4895a71853b598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:42:00.347314  257111 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.key ...
	I1216 11:42:00.347328  257111 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.key: {Name:mk7332608d140754e303107279b29341eb794e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:42:00.347505  257111 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem (1338 bytes)
	W1216 11:42:00.347549  257111 certs.go:480] ignoring /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519_empty.pem, impossibly tiny 0 bytes
	I1216 11:42:00.347560  257111 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 11:42:00.347581  257111 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:42:00.347602  257111 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:42:00.347630  257111 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 11:42:00.347665  257111 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:42:00.348365  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:42:00.375247  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 11:42:00.406194  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:42:00.431631  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 11:42:00.459749  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1216 11:42:00.482477  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 11:42:00.507444  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:42:00.531493  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 11:42:00.555899  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem --> /usr/share/ca-certificates/217519.pem (1338 bytes)
	I1216 11:42:00.580660  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /usr/share/ca-certificates/2175192.pem (1708 bytes)
	I1216 11:42:00.606206  257111 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:42:00.629992  257111 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:42:00.648788  257111 ssh_runner.go:195] Run: openssl version
	I1216 11:42:00.654506  257111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2175192.pem && ln -fs /usr/share/ca-certificates/2175192.pem /etc/ssl/certs/2175192.pem"
	I1216 11:42:00.665568  257111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2175192.pem
	I1216 11:42:00.669896  257111 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 10:42 /usr/share/ca-certificates/2175192.pem
	I1216 11:42:00.669963  257111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2175192.pem
	I1216 11:42:00.675674  257111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2175192.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 11:42:00.686576  257111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:42:00.700130  257111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:42:00.704649  257111 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:42:00.704709  257111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:42:00.710341  257111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 11:42:00.720934  257111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/217519.pem && ln -fs /usr/share/ca-certificates/217519.pem /etc/ssl/certs/217519.pem"
	I1216 11:42:00.731463  257111 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/217519.pem
	I1216 11:42:00.735878  257111 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 10:42 /usr/share/ca-certificates/217519.pem
	I1216 11:42:00.735946  257111 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/217519.pem
	I1216 11:42:00.741741  257111 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/217519.pem /etc/ssl/certs/51391683.0"
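The sequence above installs the profile's extra CA certificates into the guest's trust store: each PEM is copied under /usr/share/ca-certificates, hashed with openssl, and symlinked into /etc/ssl/certs as <subject-hash>.0. A minimal sketch of the same procedure (the file name example-ca.pem is hypothetical, not taken from this run):

	# hypothetical extra CA, registered the same way as the commands above
	CERT=/usr/share/ca-certificates/example-ca.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. 3ec20f2e
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL looks certificates up as <hash>.0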
	I1216 11:42:00.755302  257111 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:42:00.760544  257111 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 11:42:00.760614  257111 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-854528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-854528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:42:00.760773  257111 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:42:00.760834  257111 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:42:00.799165  257111 cri.go:89] found id: ""
	I1216 11:42:00.799262  257111 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 11:42:00.812303  257111 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:42:00.824909  257111 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:42:00.837339  257111 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:42:00.837367  257111 kubeadm.go:157] found existing configuration files:
	
	I1216 11:42:00.837424  257111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:42:00.848931  257111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:42:00.849014  257111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:42:00.864382  257111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:42:00.880545  257111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:42:00.880619  257111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:42:00.896071  257111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:42:00.909795  257111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:42:00.909857  257111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:42:00.930184  257111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:42:00.940374  257111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:42:00.940441  257111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
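The four grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and here every grep exits with status 2 simply because the files do not exist yet, so the rm calls are no-ops. Roughly equivalent as a loop (a sketch, not minikube's actual code):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done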
	I1216 11:42:00.954401  257111 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 11:42:01.075699  257111 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 11:42:01.075773  257111 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 11:42:01.217666  257111 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 11:42:01.217827  257111 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 11:42:01.217956  257111 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 11:42:01.393989  257111 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 11:42:01.479916  257111 out.go:235]   - Generating certificates and keys ...
	I1216 11:42:01.480049  257111 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 11:42:01.480155  257111 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 11:42:01.532488  257111 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 11:42:01.954525  257111 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 11:42:02.153531  257111 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 11:42:02.291064  257111 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 11:42:02.433619  257111 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 11:42:02.433868  257111 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-854528 localhost] and IPs [192.168.61.182 127.0.0.1 ::1]
	I1216 11:42:02.570351  257111 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 11:42:02.570504  257111 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-854528 localhost] and IPs [192.168.61.182 127.0.0.1 ::1]
	I1216 11:42:02.931154  257111 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 11:42:03.026707  257111 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 11:42:03.124673  257111 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 11:42:03.124922  257111 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 11:42:03.182537  257111 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 11:42:03.469919  257111 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 11:42:03.710738  257111 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 11:42:03.921072  257111 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 11:42:03.936250  257111 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 11:42:03.937332  257111 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 11:42:03.937400  257111 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 11:42:04.086063  257111 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 11:42:04.088810  257111 out.go:235]   - Booting up control plane ...
	I1216 11:42:04.088947  257111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 11:42:04.092263  257111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 11:42:04.094629  257111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 11:42:04.095607  257111 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 11:42:04.100498  257111 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 11:42:44.099260  257111 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 11:42:44.101880  257111 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:42:44.102126  257111 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:42:49.102808  257111 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:42:49.103113  257111 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:42:59.103635  257111 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:42:59.103862  257111 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:43:19.105009  257111 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:43:19.105206  257111 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:43:59.104478  257111 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:43:59.104680  257111 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:43:59.104703  257111 kubeadm.go:310] 
	I1216 11:43:59.104777  257111 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 11:43:59.104870  257111 kubeadm.go:310] 		timed out waiting for the condition
	I1216 11:43:59.104891  257111 kubeadm.go:310] 
	I1216 11:43:59.104964  257111 kubeadm.go:310] 	This error is likely caused by:
	I1216 11:43:59.105011  257111 kubeadm.go:310] 		- The kubelet is not running
	I1216 11:43:59.105147  257111 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 11:43:59.105169  257111 kubeadm.go:310] 
	I1216 11:43:59.105331  257111 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 11:43:59.105397  257111 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 11:43:59.105452  257111 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 11:43:59.105466  257111 kubeadm.go:310] 
	I1216 11:43:59.105617  257111 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 11:43:59.105751  257111 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 11:43:59.105773  257111 kubeadm.go:310] 
	I1216 11:43:59.105913  257111 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 11:43:59.106021  257111 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 11:43:59.106131  257111 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 11:43:59.106277  257111 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 11:43:59.106318  257111 kubeadm.go:310] 
	I1216 11:43:59.106473  257111 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 11:43:59.106574  257111 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 11:43:59.106707  257111 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
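The failure text above is kubeadm's generic triage advice; on this CRI-O node the concrete checks would look roughly like the following (a sketch only, reusing the socket path kubeadm prints):

	systemctl status kubelet                  # is the kubelet unit active at all?
	journalctl -xeu kubelet | tail -n 50      # why it exited (cgroups, config, swap, ...)
	curl -sSL http://localhost:10248/healthz  # the probe the kubelet-check loop keeps retrying
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause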
	W1216 11:43:59.106852  257111 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-854528 localhost] and IPs [192.168.61.182 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-854528 localhost] and IPs [192.168.61.182 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-854528 localhost] and IPs [192.168.61.182 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-854528 localhost] and IPs [192.168.61.182 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 11:43:59.106915  257111 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 11:44:00.317545  257111 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.210600535s)
	I1216 11:44:00.317642  257111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:44:00.335732  257111 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:44:00.349355  257111 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:44:00.349381  257111 kubeadm.go:157] found existing configuration files:
	
	I1216 11:44:00.349433  257111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:44:00.363153  257111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:44:00.363215  257111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:44:00.374144  257111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:44:00.385040  257111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:44:00.385107  257111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:44:00.398236  257111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:44:00.410970  257111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:44:00.411035  257111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:44:00.423616  257111 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:44:00.436388  257111 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:44:00.436462  257111 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 11:44:00.449512  257111 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 11:44:00.534261  257111 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 11:44:00.534382  257111 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 11:44:00.697552  257111 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 11:44:00.697663  257111 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 11:44:00.697769  257111 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 11:44:00.903696  257111 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 11:44:00.906523  257111 out.go:235]   - Generating certificates and keys ...
	I1216 11:44:00.906643  257111 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 11:44:00.906713  257111 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 11:44:00.906831  257111 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 11:44:00.906929  257111 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 11:44:00.907029  257111 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 11:44:00.907107  257111 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 11:44:00.907200  257111 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 11:44:00.907287  257111 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 11:44:00.907429  257111 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 11:44:00.907548  257111 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 11:44:00.907616  257111 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 11:44:00.907718  257111 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 11:44:01.302650  257111 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 11:44:01.579855  257111 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 11:44:01.899062  257111 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 11:44:02.083350  257111 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 11:44:02.102459  257111 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 11:44:02.104125  257111 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 11:44:02.104215  257111 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 11:44:02.257222  257111 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 11:44:02.259198  257111 out.go:235]   - Booting up control plane ...
	I1216 11:44:02.259335  257111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 11:44:02.273842  257111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 11:44:02.276750  257111 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 11:44:02.278131  257111 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 11:44:02.280708  257111 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 11:44:42.282266  257111 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 11:44:42.282438  257111 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:44:42.282630  257111 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:44:47.282876  257111 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:44:47.283094  257111 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:44:57.283603  257111 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:44:57.283941  257111 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:45:17.284623  257111 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:45:17.284841  257111 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:45:57.284170  257111 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:45:57.284488  257111 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:45:57.284515  257111 kubeadm.go:310] 
	I1216 11:45:57.284571  257111 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 11:45:57.284630  257111 kubeadm.go:310] 		timed out waiting for the condition
	I1216 11:45:57.284638  257111 kubeadm.go:310] 
	I1216 11:45:57.284683  257111 kubeadm.go:310] 	This error is likely caused by:
	I1216 11:45:57.284751  257111 kubeadm.go:310] 		- The kubelet is not running
	I1216 11:45:57.284909  257111 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 11:45:57.284922  257111 kubeadm.go:310] 
	I1216 11:45:57.285092  257111 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 11:45:57.285152  257111 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 11:45:57.285210  257111 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 11:45:57.285222  257111 kubeadm.go:310] 
	I1216 11:45:57.285368  257111 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 11:45:57.285485  257111 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 11:45:57.285502  257111 kubeadm.go:310] 
	I1216 11:45:57.285764  257111 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 11:45:57.285918  257111 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 11:45:57.286032  257111 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 11:45:57.286132  257111 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 11:45:57.286145  257111 kubeadm.go:310] 
	I1216 11:45:57.286881  257111 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 11:45:57.287006  257111 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 11:45:57.287099  257111 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 11:45:57.287176  257111 kubeadm.go:394] duration metric: took 3m56.526567512s to StartCluster
	I1216 11:45:57.287235  257111 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:45:57.287305  257111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:45:57.333551  257111 cri.go:89] found id: ""
	I1216 11:45:57.333583  257111 logs.go:282] 0 containers: []
	W1216 11:45:57.333593  257111 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:45:57.333601  257111 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:45:57.333675  257111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:45:57.369638  257111 cri.go:89] found id: ""
	I1216 11:45:57.369698  257111 logs.go:282] 0 containers: []
	W1216 11:45:57.369711  257111 logs.go:284] No container was found matching "etcd"
	I1216 11:45:57.369720  257111 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:45:57.369789  257111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:45:57.409773  257111 cri.go:89] found id: ""
	I1216 11:45:57.409805  257111 logs.go:282] 0 containers: []
	W1216 11:45:57.409816  257111 logs.go:284] No container was found matching "coredns"
	I1216 11:45:57.409824  257111 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:45:57.409886  257111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:45:57.456755  257111 cri.go:89] found id: ""
	I1216 11:45:57.456793  257111 logs.go:282] 0 containers: []
	W1216 11:45:57.456806  257111 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:45:57.456815  257111 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:45:57.456884  257111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:45:57.495671  257111 cri.go:89] found id: ""
	I1216 11:45:57.495700  257111 logs.go:282] 0 containers: []
	W1216 11:45:57.495708  257111 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:45:57.495714  257111 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:45:57.495768  257111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:45:57.544389  257111 cri.go:89] found id: ""
	I1216 11:45:57.544421  257111 logs.go:282] 0 containers: []
	W1216 11:45:57.544433  257111 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:45:57.544442  257111 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:45:57.544504  257111 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:45:57.600838  257111 cri.go:89] found id: ""
	I1216 11:45:57.600873  257111 logs.go:282] 0 containers: []
	W1216 11:45:57.600886  257111 logs.go:284] No container was found matching "kindnet"
	I1216 11:45:57.600904  257111 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:45:57.600924  257111 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:45:57.774187  257111 logs.go:123] Gathering logs for container status ...
	I1216 11:45:57.774250  257111 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:45:57.853237  257111 logs.go:123] Gathering logs for kubelet ...
	I1216 11:45:57.853279  257111 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:45:57.934930  257111 logs.go:123] Gathering logs for dmesg ...
	I1216 11:45:57.934991  257111 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:45:57.954656  257111 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:45:57.954704  257111 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:45:58.153659  257111 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
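With no control-plane containers running, each crictl query above returns an empty list and the describe-nodes call cannot reach the API server on localhost:8443, so minikube falls back to host-level logs. The same evidence can be pulled by hand with commands mirroring the Run: lines above (sketch):

	sudo journalctl -u crio -n 400       # CRI-O runtime log
	sudo journalctl -u kubelet -n 400    # kubelet log
	sudo crictl ps -a                    # any containers that were actually created
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400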
	W1216 11:45:58.153696  257111 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 11:45:58.153756  257111 out.go:270] * 
	* 
	W1216 11:45:58.153820  257111 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 11:45:58.153841  257111 out.go:270] * 
	W1216 11:45:58.155096  257111 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 11:45:58.159768  257111 out.go:201] 
	W1216 11:45:58.161617  257111 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 11:45:58.161685  257111 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 11:45:58.161714  257111 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 11:45:58.163174  257111 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-854528 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
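The stderr above already points at the likely next step: check the kubelet's own logs and retry the v1.20.0 start with the kubelet cgroup driver set to systemd. A minimal sketch of that retry, reusing the profile and flags from the failing invocation (whether the suggested --extra-config actually cures this run is not verified by this report):

	out/minikube-linux-amd64 -p kubernetes-upgrade-854528 logs --file=logs.txt
	out/minikube-linux-amd64 -p kubernetes-upgrade-854528 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 start -p kubernetes-upgrade-854528 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd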
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-854528
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-854528: (6.389504054s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-854528 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-854528 status --format={{.Host}}: exit status 7 (88.735864ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854528 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1216 11:46:06.844341  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-854528 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.604477535s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-854528 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854528 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-854528 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (129.139361ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-854528] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-854528
	    minikube start -p kubernetes-upgrade-854528 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8545282 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-854528 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-854528 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-854528 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.364991187s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-16 11:48:05.902560542 +0000 UTC m=+4607.996510841
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-854528 -n kubernetes-upgrade-854528
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-854528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-854528 logs -n 25: (1.623773645s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939 sudo cat                | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939 sudo cat                | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939 sudo cat                | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-560939                         | enable-default-cni-560939 | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC | 16 Dec 24 11:47 UTC |
	| start   | -p old-k8s-version-933974                            | old-k8s-version-933974    | jenkins | v1.34.0 | 16 Dec 24 11:47 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --kvm-network=default                                |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                           |         |         |                     |                     |
	|         | --keep-context=false                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:47:37
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:47:37.825024  269561 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:47:37.825176  269561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:47:37.825186  269561 out.go:358] Setting ErrFile to fd 2...
	I1216 11:47:37.825191  269561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:47:37.825396  269561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:47:37.826075  269561 out.go:352] Setting JSON to false
	I1216 11:47:37.827220  269561 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12605,"bootTime":1734337053,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 11:47:37.827328  269561 start.go:139] virtualization: kvm guest
	I1216 11:47:37.829287  269561 out.go:177] * [old-k8s-version-933974] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 11:47:37.830683  269561 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:47:37.830738  269561 notify.go:220] Checking for updates...
	I1216 11:47:37.832934  269561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:47:37.834070  269561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:47:37.835212  269561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:47:37.836246  269561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 11:47:37.837358  269561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:47:37.838978  269561 config.go:182] Loaded profile config "bridge-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:47:37.839067  269561 config.go:182] Loaded profile config "flannel-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:47:37.839139  269561 config.go:182] Loaded profile config "kubernetes-upgrade-854528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:47:37.839225  269561 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:47:37.875978  269561 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 11:47:37.877114  269561 start.go:297] selected driver: kvm2
	I1216 11:47:37.877130  269561 start.go:901] validating driver "kvm2" against <nil>
	I1216 11:47:37.877142  269561 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:47:37.877979  269561 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:47:37.878086  269561 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 11:47:37.894314  269561 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 11:47:37.894381  269561 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:47:37.894650  269561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:47:37.894685  269561 cni.go:84] Creating CNI manager for ""
	I1216 11:47:37.894728  269561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:47:37.894737  269561 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 11:47:37.894807  269561 start.go:340] cluster config:
	{Name:old-k8s-version-933974 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:47:37.894904  269561 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:47:37.896669  269561 out.go:177] * Starting "old-k8s-version-933974" primary control-plane node in "old-k8s-version-933974" cluster
	I1216 11:47:37.603949  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:37.604514  266763 main.go:141] libmachine: (flannel-560939) found domain IP: 192.168.39.160
	I1216 11:47:37.604542  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has current primary IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:37.604551  266763 main.go:141] libmachine: (flannel-560939) reserving static IP address...
	I1216 11:47:37.604865  266763 main.go:141] libmachine: (flannel-560939) DBG | unable to find host DHCP lease matching {name: "flannel-560939", mac: "52:54:00:20:5d:ed", ip: "192.168.39.160"} in network mk-flannel-560939
	I1216 11:47:37.686042  266763 main.go:141] libmachine: (flannel-560939) reserved static IP address 192.168.39.160 for domain flannel-560939
	I1216 11:47:37.686071  266763 main.go:141] libmachine: (flannel-560939) waiting for SSH...
	I1216 11:47:37.686082  266763 main.go:141] libmachine: (flannel-560939) DBG | Getting to WaitForSSH function...
	I1216 11:47:37.689552  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:37.689871  266763 main.go:141] libmachine: (flannel-560939) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939
	I1216 11:47:37.689900  266763 main.go:141] libmachine: (flannel-560939) DBG | unable to find defined IP address of network mk-flannel-560939 interface with MAC address 52:54:00:20:5d:ed
	I1216 11:47:37.690055  266763 main.go:141] libmachine: (flannel-560939) DBG | Using SSH client type: external
	I1216 11:47:37.690100  266763 main.go:141] libmachine: (flannel-560939) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/flannel-560939/id_rsa (-rw-------)
	I1216 11:47:37.690140  266763 main.go:141] libmachine: (flannel-560939) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/flannel-560939/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 11:47:37.690161  266763 main.go:141] libmachine: (flannel-560939) DBG | About to run SSH command:
	I1216 11:47:37.690175  266763 main.go:141] libmachine: (flannel-560939) DBG | exit 0
	I1216 11:47:37.694186  266763 main.go:141] libmachine: (flannel-560939) DBG | SSH cmd err, output: exit status 255: 
	I1216 11:47:37.694206  266763 main.go:141] libmachine: (flannel-560939) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1216 11:47:37.694213  266763 main.go:141] libmachine: (flannel-560939) DBG | command : exit 0
	I1216 11:47:37.694218  266763 main.go:141] libmachine: (flannel-560939) DBG | err     : exit status 255
	I1216 11:47:37.694224  266763 main.go:141] libmachine: (flannel-560939) DBG | output  : 
	I1216 11:47:42.065544  267586 start.go:364] duration metric: took 21.332733484s to acquireMachinesLock for "kubernetes-upgrade-854528"
	I1216 11:47:42.065598  267586 start.go:96] Skipping create...Using existing machine configuration
	I1216 11:47:42.065606  267586 fix.go:54] fixHost starting: 
	I1216 11:47:42.066081  267586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:47:42.066139  267586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:47:42.084673  267586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I1216 11:47:42.085233  267586 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:47:42.085785  267586 main.go:141] libmachine: Using API Version  1
	I1216 11:47:42.085808  267586 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:47:42.086120  267586 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:47:42.086310  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:47:42.086488  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetState
	I1216 11:47:42.088115  267586 fix.go:112] recreateIfNeeded on kubernetes-upgrade-854528: state=Running err=<nil>
	W1216 11:47:42.088138  267586 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 11:47:42.090032  267586 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-854528" VM ...
	I1216 11:47:37.897825  269561 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 11:47:37.897861  269561 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 11:47:37.897868  269561 cache.go:56] Caching tarball of preloaded images
	I1216 11:47:37.897945  269561 preload.go:172] Found /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 11:47:37.897957  269561 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 11:47:37.898052  269561 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/config.json ...
	I1216 11:47:37.898070  269561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/config.json: {Name:mkfe4215b798120ec67203e9963a4936f9ecd548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:47:37.898191  269561 start.go:360] acquireMachinesLock for old-k8s-version-933974: {Name:mk416695f38f04563a2ca155c5383ee4d8f0f97b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:47:40.695168  266763 main.go:141] libmachine: (flannel-560939) DBG | Getting to WaitForSSH function...
	I1216 11:47:40.698085  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:40.698616  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:40.698652  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:40.698807  266763 main.go:141] libmachine: (flannel-560939) DBG | Using SSH client type: external
	I1216 11:47:40.698832  266763 main.go:141] libmachine: (flannel-560939) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/flannel-560939/id_rsa (-rw-------)
	I1216 11:47:40.698870  266763 main.go:141] libmachine: (flannel-560939) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.160 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/flannel-560939/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 11:47:40.698887  266763 main.go:141] libmachine: (flannel-560939) DBG | About to run SSH command:
	I1216 11:47:40.698901  266763 main.go:141] libmachine: (flannel-560939) DBG | exit 0
	I1216 11:47:40.821038  266763 main.go:141] libmachine: (flannel-560939) DBG | SSH cmd err, output: <nil>: 
	I1216 11:47:40.821328  266763 main.go:141] libmachine: (flannel-560939) KVM machine creation complete
	I1216 11:47:40.821642  266763 main.go:141] libmachine: (flannel-560939) Calling .GetConfigRaw
	I1216 11:47:40.822272  266763 main.go:141] libmachine: (flannel-560939) Calling .DriverName
	I1216 11:47:40.822479  266763 main.go:141] libmachine: (flannel-560939) Calling .DriverName
	I1216 11:47:40.822641  266763 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1216 11:47:40.822658  266763 main.go:141] libmachine: (flannel-560939) Calling .GetState
	I1216 11:47:40.823912  266763 main.go:141] libmachine: Detecting operating system of created instance...
	I1216 11:47:40.823926  266763 main.go:141] libmachine: Waiting for SSH to be available...
	I1216 11:47:40.823931  266763 main.go:141] libmachine: Getting to WaitForSSH function...
	I1216 11:47:40.823937  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:40.826213  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:40.826620  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:40.826650  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:40.826794  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHPort
	I1216 11:47:40.827001  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:40.827175  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:40.827328  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHUsername
	I1216 11:47:40.827554  266763 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:40.827826  266763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1216 11:47:40.827840  266763 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1216 11:47:40.924256  266763 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:47:40.924285  266763 main.go:141] libmachine: Detecting the provisioner...
	I1216 11:47:40.924296  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:40.927161  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:40.927518  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:40.927547  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:40.927720  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHPort
	I1216 11:47:40.927956  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:40.928119  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:40.928268  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHUsername
	I1216 11:47:40.928472  266763 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:40.928651  266763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1216 11:47:40.928661  266763 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1216 11:47:41.025794  266763 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1216 11:47:41.025858  266763 main.go:141] libmachine: found compatible host: buildroot
	I1216 11:47:41.025869  266763 main.go:141] libmachine: Provisioning with buildroot...
	I1216 11:47:41.025877  266763 main.go:141] libmachine: (flannel-560939) Calling .GetMachineName
	I1216 11:47:41.026147  266763 buildroot.go:166] provisioning hostname "flannel-560939"
	I1216 11:47:41.026177  266763 main.go:141] libmachine: (flannel-560939) Calling .GetMachineName
	I1216 11:47:41.026377  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:41.029728  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.030121  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:41.030158  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.030324  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHPort
	I1216 11:47:41.030553  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:41.030760  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:41.030937  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHUsername
	I1216 11:47:41.031096  266763 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:41.031295  266763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1216 11:47:41.031316  266763 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-560939 && echo "flannel-560939" | sudo tee /etc/hostname
	I1216 11:47:41.142446  266763 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-560939
	
	I1216 11:47:41.142495  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:41.145369  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.145745  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:41.145771  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.145989  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHPort
	I1216 11:47:41.146193  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:41.146410  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:41.146609  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHUsername
	I1216 11:47:41.146786  266763 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:41.146974  266763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1216 11:47:41.146996  266763 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-560939' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-560939/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-560939' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:47:41.254029  266763 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:47:41.254065  266763 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 11:47:41.254096  266763 buildroot.go:174] setting up certificates
	I1216 11:47:41.254106  266763 provision.go:84] configureAuth start
	I1216 11:47:41.254116  266763 main.go:141] libmachine: (flannel-560939) Calling .GetMachineName
	I1216 11:47:41.254450  266763 main.go:141] libmachine: (flannel-560939) Calling .GetIP
	I1216 11:47:41.257474  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.257907  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:41.257953  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.258156  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:41.260500  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.260812  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:41.260834  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.260949  266763 provision.go:143] copyHostCerts
	I1216 11:47:41.261025  266763 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem, removing ...
	I1216 11:47:41.261039  266763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem
	I1216 11:47:41.261114  266763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 11:47:41.261246  266763 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem, removing ...
	I1216 11:47:41.261260  266763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem
	I1216 11:47:41.261299  266763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
	I1216 11:47:41.261381  266763 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem, removing ...
	I1216 11:47:41.261391  266763 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem
	I1216 11:47:41.261430  266763 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 11:47:41.261499  266763 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.flannel-560939 san=[127.0.0.1 192.168.39.160 flannel-560939 localhost minikube]
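The server certificate generated here is signed by the profile's minikube CA and carries the SAN list shown above. minikube builds it in Go; an equivalent openssl sketch (file names illustrative, run from the profile's certs directory) would be:

	# key + CSR for the machine, using the org reported in the log
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr -subj "/O=jenkins.flannel-560939"
	# sign with the minikube CA, attaching the same SANs
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.160,DNS:flannel-560939,DNS:localhost,DNS:minikube")

The copyRemoteCerts step that follows then places ca.pem, server.pem and server-key.pem under /etc/docker on the guest.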
	I1216 11:47:41.466668  266763 provision.go:177] copyRemoteCerts
	I1216 11:47:41.466736  266763 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:47:41.466763  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:41.469810  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.470176  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:41.470207  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.470342  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHPort
	I1216 11:47:41.470561  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:41.470702  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHUsername
	I1216 11:47:41.470833  266763 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/flannel-560939/id_rsa Username:docker}
	I1216 11:47:41.546590  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:47:41.573226  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 11:47:41.599724  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 11:47:41.625019  266763 provision.go:87] duration metric: took 370.896758ms to configureAuth
	I1216 11:47:41.625058  266763 buildroot.go:189] setting minikube options for container-runtime
	I1216 11:47:41.625255  266763 config.go:182] Loaded profile config "flannel-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:47:41.625356  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:41.628071  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.628446  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:41.628487  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.628739  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHPort
	I1216 11:47:41.628950  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:41.629134  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:41.629334  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHUsername
	I1216 11:47:41.629505  266763 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:41.629742  266763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1216 11:47:41.629763  266763 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:47:41.839319  266763 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:47:41.839351  266763 main.go:141] libmachine: Checking connection to Docker...
	I1216 11:47:41.839362  266763 main.go:141] libmachine: (flannel-560939) Calling .GetURL
	I1216 11:47:41.840707  266763 main.go:141] libmachine: (flannel-560939) DBG | using libvirt version 6000000
	I1216 11:47:41.842647  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.842982  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:41.843011  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.843145  266763 main.go:141] libmachine: Docker is up and running!
	I1216 11:47:41.843160  266763 main.go:141] libmachine: Reticulating splines...
	I1216 11:47:41.843171  266763 client.go:171] duration metric: took 26.305002087s to LocalClient.Create
	I1216 11:47:41.843205  266763 start.go:167] duration metric: took 26.30508123s to libmachine.API.Create "flannel-560939"
	I1216 11:47:41.843223  266763 start.go:293] postStartSetup for "flannel-560939" (driver="kvm2")
	I1216 11:47:41.843249  266763 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:47:41.843275  266763 main.go:141] libmachine: (flannel-560939) Calling .DriverName
	I1216 11:47:41.843509  266763 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:47:41.843535  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:41.845533  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.845936  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:41.845962  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.846096  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHPort
	I1216 11:47:41.846271  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:41.846429  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHUsername
	I1216 11:47:41.846553  266763 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/flannel-560939/id_rsa Username:docker}
	I1216 11:47:41.923089  266763 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:47:41.927340  266763 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 11:47:41.927369  266763 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 11:47:41.927434  266763 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 11:47:41.927533  266763 filesync.go:149] local asset: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem -> 2175192.pem in /etc/ssl/certs
	I1216 11:47:41.927649  266763 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 11:47:41.937005  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:47:41.959379  266763 start.go:296] duration metric: took 116.118926ms for postStartSetup
	I1216 11:47:41.959436  266763 main.go:141] libmachine: (flannel-560939) Calling .GetConfigRaw
	I1216 11:47:41.960127  266763 main.go:141] libmachine: (flannel-560939) Calling .GetIP
	I1216 11:47:41.962742  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.963126  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:41.963159  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.963448  266763 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/config.json ...
	I1216 11:47:41.963684  266763 start.go:128] duration metric: took 26.453301673s to createHost
	I1216 11:47:41.963715  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:41.966538  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.966956  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:41.966987  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:41.967170  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHPort
	I1216 11:47:41.967385  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:41.967597  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:41.967788  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHUsername
	I1216 11:47:41.967976  266763 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:41.968223  266763 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.160 22 <nil> <nil>}
	I1216 11:47:41.968244  266763 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 11:47:42.065380  266763 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734349662.049942643
	
	I1216 11:47:42.065411  266763 fix.go:216] guest clock: 1734349662.049942643
	I1216 11:47:42.065422  266763 fix.go:229] Guest: 2024-12-16 11:47:42.049942643 +0000 UTC Remote: 2024-12-16 11:47:41.963700496 +0000 UTC m=+26.610635054 (delta=86.242147ms)
	I1216 11:47:42.065451  266763 fix.go:200] guest clock delta is within tolerance: 86.242147ms
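The clock check above compares fractional epoch seconds (date +%s.%N) on guest and host and accepts the small delta. A standalone version of the same comparison, reusing the SSH key from this profile, could be:

	key=/home/jenkins/minikube-integration/20107-210204/.minikube/machines/flannel-560939/id_rsa
	guest=$(ssh -i "$key" docker@192.168.39.160 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest clock delta: %+.3fs\n", g - h }'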
	I1216 11:47:42.065458  266763 start.go:83] releasing machines lock for "flannel-560939", held for 26.555221744s
	I1216 11:47:42.065490  266763 main.go:141] libmachine: (flannel-560939) Calling .DriverName
	I1216 11:47:42.065795  266763 main.go:141] libmachine: (flannel-560939) Calling .GetIP
	I1216 11:47:42.068378  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:42.068769  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:42.068797  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:42.069018  266763 main.go:141] libmachine: (flannel-560939) Calling .DriverName
	I1216 11:47:42.069636  266763 main.go:141] libmachine: (flannel-560939) Calling .DriverName
	I1216 11:47:42.069851  266763 main.go:141] libmachine: (flannel-560939) Calling .DriverName
	I1216 11:47:42.069961  266763 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:47:42.070005  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:42.070119  266763 ssh_runner.go:195] Run: cat /version.json
	I1216 11:47:42.070161  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHHostname
	I1216 11:47:42.072985  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:42.073299  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:42.073374  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:42.073412  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:42.073709  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHPort
	I1216 11:47:42.073773  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:42.073807  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:42.073887  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:42.073992  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHPort
	I1216 11:47:42.074059  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHUsername
	I1216 11:47:42.074177  266763 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/flannel-560939/id_rsa Username:docker}
	I1216 11:47:42.077085  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHKeyPath
	I1216 11:47:42.077275  266763 main.go:141] libmachine: (flannel-560939) Calling .GetSSHUsername
	I1216 11:47:42.077459  266763 sshutil.go:53] new ssh client: &{IP:192.168.39.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/flannel-560939/id_rsa Username:docker}
	I1216 11:47:42.170451  266763 ssh_runner.go:195] Run: systemctl --version
	I1216 11:47:42.176416  266763 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:47:42.333220  266763 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 11:47:42.338975  266763 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 11:47:42.339047  266763 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:47:42.355606  266763 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 11:47:42.355634  266763 start.go:495] detecting cgroup driver to use...
	I1216 11:47:42.355699  266763 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:47:42.375528  266763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:47:42.392313  266763 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:47:42.392404  266763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:47:42.407881  266763 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:47:42.420929  266763 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:47:42.536793  266763 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:47:42.691950  266763 docker.go:233] disabling docker service ...
	I1216 11:47:42.692022  266763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:47:42.707235  266763 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:47:42.723234  266763 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:47:42.861609  266763 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:47:43.003244  266763 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:47:43.017882  266763 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:47:43.039717  266763 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 11:47:43.039781  266763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:43.050397  266763 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:47:43.050478  266763 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:43.060829  266763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:43.071102  266763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:43.082623  266763 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:47:43.093043  266763 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:43.103033  266763 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:43.121303  266763 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
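After these sed edits, /etc/crio/crio.conf.d/02-crio.conf carries the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup and the unprivileged-port sysctl. A grep on the guest shows the expected values (approximate; the full file is not captured in this log):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",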
	I1216 11:47:43.131591  266763 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:47:43.140749  266763 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 11:47:43.140807  266763 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 11:47:43.153557  266763 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
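The br_netfilter module and the ip_forward toggle set here apply only to the running boot. On a persistent host the same settings would normally be made permanent via modules-load.d and sysctl.d (a generic sketch, not something this provisioning step writes):

	echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
	  | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system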
	I1216 11:47:43.163074  266763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:47:43.268941  266763 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 11:47:43.362770  266763 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:47:43.362859  266763 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:47:43.367182  266763 start.go:563] Will wait 60s for crictl version
	I1216 11:47:43.367244  266763 ssh_runner.go:195] Run: which crictl
	I1216 11:47:43.370805  266763 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:47:43.405897  266763 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
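The version output above is retrieved through the endpoint written to /etc/crictl.yaml at 11:47:43.017882, so crictl needs no explicit --runtime-endpoint flag from here on. Equivalent manual checks on the guest would be:

	sudo crictl version    # RuntimeName: cri-o, RuntimeVersion: 1.29.1
	sudo crictl info       # runtime status; endpoint comes from /etc/crictl.yaml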
	I1216 11:47:43.406009  266763 ssh_runner.go:195] Run: crio --version
	I1216 11:47:43.434484  266763 ssh_runner.go:195] Run: crio --version
	I1216 11:47:43.462929  266763 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1216 11:47:43.464094  266763 main.go:141] libmachine: (flannel-560939) Calling .GetIP
	I1216 11:47:43.466875  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:43.467261  266763 main.go:141] libmachine: (flannel-560939) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:5d:ed", ip: ""} in network mk-flannel-560939: {Iface:virbr1 ExpiryTime:2024-12-16 12:47:31 +0000 UTC Type:0 Mac:52:54:00:20:5d:ed Iaid: IPaddr:192.168.39.160 Prefix:24 Hostname:flannel-560939 Clientid:01:52:54:00:20:5d:ed}
	I1216 11:47:43.467284  266763 main.go:141] libmachine: (flannel-560939) DBG | domain flannel-560939 has defined IP address 192.168.39.160 and MAC address 52:54:00:20:5d:ed in network mk-flannel-560939
	I1216 11:47:43.467476  266763 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 11:47:43.471727  266763 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:47:43.484066  266763 kubeadm.go:883] updating cluster {Name:flannel-560939 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-560939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:47:43.484213  266763 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:47:43.484274  266763 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:47:43.514194  266763 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1216 11:47:43.514275  266763 ssh_runner.go:195] Run: which lz4
	I1216 11:47:43.518310  266763 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 11:47:43.522800  266763 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 11:47:43.522828  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1216 11:47:44.812517  266763 crio.go:462] duration metric: took 1.294238461s to copy over tarball
	I1216 11:47:44.812607  266763 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
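The preload tarball is streamed to the guest and unpacked into /var with lz4 compression; once the extraction completes (11:47:46 below), the preloaded images can be spot-checked the same way the log later does, e.g.:

	sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy'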
	I1216 11:47:42.091613  267586 machine.go:93] provisionDockerMachine start ...
	I1216 11:47:42.091644  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:47:42.091886  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:47:42.094662  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.095130  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:42.095161  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.095343  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:47:42.095536  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:42.095705  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:42.095887  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:47:42.096075  267586 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:42.096326  267586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:47:42.096338  267586 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 11:47:42.210392  267586 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-854528
	
	I1216 11:47:42.210446  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetMachineName
	I1216 11:47:42.210747  267586 buildroot.go:166] provisioning hostname "kubernetes-upgrade-854528"
	I1216 11:47:42.210779  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetMachineName
	I1216 11:47:42.211004  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:47:42.213654  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.213990  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:42.214016  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.214121  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:47:42.214318  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:42.214500  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:42.214752  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:47:42.214925  267586 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:42.215137  267586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:47:42.215159  267586 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-854528 && echo "kubernetes-upgrade-854528" | sudo tee /etc/hostname
	I1216 11:47:42.345613  267586 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-854528
	
	I1216 11:47:42.345652  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:47:42.348521  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.348967  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:42.349006  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.349251  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:47:42.349468  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:42.349631  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:42.349811  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:47:42.350023  267586 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:42.350300  267586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:47:42.350322  267586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-854528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-854528/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-854528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:47:42.466021  267586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:47:42.466072  267586 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 11:47:42.466127  267586 buildroot.go:174] setting up certificates
	I1216 11:47:42.466138  267586 provision.go:84] configureAuth start
	I1216 11:47:42.466150  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetMachineName
	I1216 11:47:42.466555  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetIP
	I1216 11:47:42.469493  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.469939  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:42.469969  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.470143  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:47:42.472633  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.473029  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:42.473058  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.473229  267586 provision.go:143] copyHostCerts
	I1216 11:47:42.473299  267586 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem, removing ...
	I1216 11:47:42.473312  267586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem
	I1216 11:47:42.473374  267586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 11:47:42.473503  267586 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem, removing ...
	I1216 11:47:42.473517  267586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem
	I1216 11:47:42.473551  267586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 11:47:42.473644  267586 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem, removing ...
	I1216 11:47:42.473655  267586 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem
	I1216 11:47:42.473683  267586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
	I1216 11:47:42.473767  267586 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-854528 san=[127.0.0.1 192.168.61.182 kubernetes-upgrade-854528 localhost minikube]
	I1216 11:47:42.608513  267586 provision.go:177] copyRemoteCerts
	I1216 11:47:42.608586  267586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:47:42.608625  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:47:42.611564  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.611887  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:42.611920  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.612133  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:47:42.612367  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:42.612550  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:47:42.612705  267586 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa Username:docker}
	I1216 11:47:42.704854  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 11:47:42.730999  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:47:42.761857  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1216 11:47:42.787753  267586 provision.go:87] duration metric: took 321.600344ms to configureAuth
	I1216 11:47:42.787788  267586 buildroot.go:189] setting minikube options for container-runtime
	I1216 11:47:42.787981  267586 config.go:182] Loaded profile config "kubernetes-upgrade-854528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:47:42.788077  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:47:42.791163  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.791557  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:42.791594  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:42.791786  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:47:42.791978  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:42.792160  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:42.792299  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:47:42.792459  267586 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:42.792694  267586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:47:42.792711  267586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:47:46.986700  266763 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.174047702s)
	I1216 11:47:46.986733  266763 crio.go:469] duration metric: took 2.174179824s to extract the tarball
	I1216 11:47:46.986741  266763 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 11:47:47.022451  266763 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:47:47.060836  266763 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 11:47:47.060861  266763 cache_images.go:84] Images are preloaded, skipping loading
	I1216 11:47:47.060871  266763 kubeadm.go:934] updating node { 192.168.39.160 8443 v1.31.2 crio true true} ...
	I1216 11:47:47.060998  266763 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-560939 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:flannel-560939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1216 11:47:47.061069  266763 ssh_runner.go:195] Run: crio config
	I1216 11:47:47.106975  266763 cni.go:84] Creating CNI manager for "flannel"
	I1216 11:47:47.107010  266763 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 11:47:47.107036  266763 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.160 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-560939 NodeName:flannel-560939 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 11:47:47.107165  266763 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-560939"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.160"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.160"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 11:47:47.107230  266763 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 11:47:47.116829  266763 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:47:47.116923  266763 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:47:47.125968  266763 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1216 11:47:47.142009  266763 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:47:47.157625  266763 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
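The kubeadm config rendered at 11:47:47.107165 above is staged here as /var/tmp/minikube/kubeadm.yaml.new and later consumed by kubeadm from the binaries directory verified above. Applied by hand, a config of this shape would be used roughly as follows (path illustrative; the exact minikube invocation and its preflight flags are not shown in this excerpt):

	# path illustrative: the file staged above is kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml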
	I1216 11:47:47.173396  266763 ssh_runner.go:195] Run: grep 192.168.39.160	control-plane.minikube.internal$ /etc/hosts
	I1216 11:47:47.177305  266763 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:47:47.189216  266763 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:47:47.307383  266763 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:47:47.323398  266763 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939 for IP: 192.168.39.160
	I1216 11:47:47.323420  266763 certs.go:194] generating shared ca certs ...
	I1216 11:47:47.323438  266763 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:47:47.323619  266763 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 11:47:47.323658  266763 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 11:47:47.323667  266763 certs.go:256] generating profile certs ...
	I1216 11:47:47.323721  266763 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.key
	I1216 11:47:47.323746  266763 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt with IP's: []
	I1216 11:47:47.432099  266763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt ...
	I1216 11:47:47.432133  266763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: {Name:mkef401ef6283c63cccfaffb05322acaede5f9b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:47:47.432313  266763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.key ...
	I1216 11:47:47.432324  266763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.key: {Name:mkee2b02a6f87177e618beeb0b3fb8d5cf6ffe38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:47:47.432399  266763 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.key.4bf074cc
	I1216 11:47:47.432415  266763 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.crt.4bf074cc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.160]
	I1216 11:47:47.567056  266763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.crt.4bf074cc ...
	I1216 11:47:47.567093  266763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.crt.4bf074cc: {Name:mkf3061484592f2b5f187247178a98e3bb57f813 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:47:47.567310  266763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.key.4bf074cc ...
	I1216 11:47:47.567331  266763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.key.4bf074cc: {Name:mk84f3f6f1f3aea908334a13e77616f5aaec82bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:47:47.567446  266763 certs.go:381] copying /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.crt.4bf074cc -> /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.crt
	I1216 11:47:47.567523  266763 certs.go:385] copying /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.key.4bf074cc -> /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.key
	I1216 11:47:47.567580  266763 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/proxy-client.key
	I1216 11:47:47.567599  266763 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/proxy-client.crt with IP's: []
	I1216 11:47:47.868130  266763 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/proxy-client.crt ...
	I1216 11:47:47.868163  266763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/proxy-client.crt: {Name:mk0b08408d4639480c011811a4c0c889118467f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:47:47.868354  266763 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/proxy-client.key ...
	I1216 11:47:47.868376  266763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/proxy-client.key: {Name:mk8720625e48ddbe596dd43ef99c8c0c7071808e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
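The crypto.go/certs.go lines above generate the profile certificates: a "minikube-user" client pair, an apiserver serving pair with the listed IP SANs, and an "aggregator" proxy-client pair, each signed by the shared minikube CA. The following is only a rough stdlib sketch of the "key pair plus CA-signed certificate" idea; the CA is generated in-memory so the example runs standalone (minikube reuses ca.crt/ca.key from disk), and error handling is omitted for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Stand-in CA key and self-signed CA certificate ("minikubeCA" style).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Client key and certificate signed by that CA ("minikube-user" style).
	clientKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, _ := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: clientDER})
}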
	I1216 11:47:47.868584  266763 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem (1338 bytes)
	W1216 11:47:47.868627  266763 certs.go:480] ignoring /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519_empty.pem, impossibly tiny 0 bytes
	I1216 11:47:47.868641  266763 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 11:47:47.868662  266763 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:47:47.868686  266763 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:47:47.868709  266763 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 11:47:47.868745  266763 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:47:47.869514  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:47:47.894542  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 11:47:47.916576  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:47:47.940470  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 11:47:47.966092  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 11:47:47.997237  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 11:47:48.036329  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:47:48.069462  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 11:47:48.094563  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem --> /usr/share/ca-certificates/217519.pem (1338 bytes)
	I1216 11:47:48.118943  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /usr/share/ca-certificates/2175192.pem (1708 bytes)
	I1216 11:47:48.144451  266763 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:47:48.167718  266763 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:47:48.183754  266763 ssh_runner.go:195] Run: openssl version
	I1216 11:47:48.189651  266763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/217519.pem && ln -fs /usr/share/ca-certificates/217519.pem /etc/ssl/certs/217519.pem"
	I1216 11:47:48.199776  266763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/217519.pem
	I1216 11:47:48.204143  266763 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 10:42 /usr/share/ca-certificates/217519.pem
	I1216 11:47:48.204203  266763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/217519.pem
	I1216 11:47:48.209806  266763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/217519.pem /etc/ssl/certs/51391683.0"
	I1216 11:47:48.220418  266763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2175192.pem && ln -fs /usr/share/ca-certificates/2175192.pem /etc/ssl/certs/2175192.pem"
	I1216 11:47:48.231045  266763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2175192.pem
	I1216 11:47:48.235411  266763 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 10:42 /usr/share/ca-certificates/2175192.pem
	I1216 11:47:48.235465  266763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2175192.pem
	I1216 11:47:48.241136  266763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2175192.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 11:47:48.252098  266763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:47:48.263294  266763 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:47:48.267970  266763 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:47:48.268038  266763 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:47:48.273651  266763 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
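The openssl/ln sequence above exists because OpenSSL looks up trusted CAs in /etc/ssl/certs by subject-hash filenames (for example b5213941.0 for minikubeCA.pem here), so each .pem is hashed with "openssl x509 -hash -noout" and symlinked under that name. A small Go sketch of the same step, assuming openssl is on PATH; the paths are taken from the log and the code is illustrative, not minikube's helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // example cert from the log
	// Ask openssl for the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Create the "<hash>.0" symlink only if it does not already exist.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("linked", link, "->", pem)
}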
	I1216 11:47:48.284549  266763 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:47:48.288576  266763 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 11:47:48.288639  266763 kubeadm.go:392] StartCluster: {Name:flannel-560939 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-560939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.39.160 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:47:48.288742  266763 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:47:48.288811  266763 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:47:48.328891  266763 cri.go:89] found id: ""
	I1216 11:47:48.328976  266763 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 11:47:48.339151  266763 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:47:48.348741  266763 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:47:48.359900  266763 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:47:48.359924  266763 kubeadm.go:157] found existing configuration files:
	
	I1216 11:47:48.359976  266763 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:47:48.369497  266763 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:47:48.369564  266763 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:47:48.380266  266763 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:47:48.390667  266763 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:47:48.390734  266763 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:47:48.401499  266763 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:47:48.411581  266763 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:47:48.411662  266763 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:47:48.421741  266763 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:47:48.431375  266763 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:47:48.431441  266763 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
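The grep/rm sequence above checks whether each existing kubeconfig under /etc/kubernetes points at https://control-plane.minikube.internal:8443 and removes any file that does not, so kubeadm regenerates it (here none of the four files exist yet, so the removals are effectively no-ops before "kubeadm init"). A compact sketch of that check, illustrative only:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already points at the expected endpoint
		}
		os.Remove(f) // missing files simply stay missing
		fmt.Println("removed or absent:", f)
	}
}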
	I1216 11:47:48.441280  266763 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 11:47:48.489496  266763 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1216 11:47:48.489589  266763 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 11:47:48.586720  266763 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 11:47:48.586819  266763 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 11:47:48.586979  266763 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 11:47:48.597615  266763 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 11:47:48.678874  266763 out.go:235]   - Generating certificates and keys ...
	I1216 11:47:48.678985  266763 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 11:47:48.679066  266763 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 11:47:48.679160  266763 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 11:47:48.840567  266763 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 11:47:49.026109  266763 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 11:47:49.173199  266763 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 11:47:49.452138  266763 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 11:47:49.452313  266763 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-560939 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	I1216 11:47:49.600487  266763 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 11:47:49.600653  266763 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-560939 localhost] and IPs [192.168.39.160 127.0.0.1 ::1]
	I1216 11:47:49.644649  266763 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 11:47:49.716212  266763 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 11:47:49.911341  266763 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 11:47:49.911460  266763 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 11:47:50.014251  266763 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 11:47:50.078669  266763 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 11:47:50.175607  266763 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 11:47:50.354162  266763 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 11:47:50.486852  266763 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 11:47:50.487395  266763 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 11:47:50.489889  266763 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 11:47:51.297782  268169 start.go:364] duration metric: took 26.115759897s to acquireMachinesLock for "bridge-560939"
	I1216 11:47:51.297913  268169 start.go:93] Provisioning new machine with config: &{Name:bridge-560939 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-560939 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:47:51.298085  268169 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 11:47:51.301000  268169 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 11:47:51.301260  268169 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:47:51.301331  268169 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:47:51.317917  268169 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I1216 11:47:51.318415  268169 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:47:51.318989  268169 main.go:141] libmachine: Using API Version  1
	I1216 11:47:51.319016  268169 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:47:51.319326  268169 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:47:51.319515  268169 main.go:141] libmachine: (bridge-560939) Calling .GetMachineName
	I1216 11:47:51.319685  268169 main.go:141] libmachine: (bridge-560939) Calling .DriverName
	I1216 11:47:51.319849  268169 start.go:159] libmachine.API.Create for "bridge-560939" (driver="kvm2")
	I1216 11:47:51.319881  268169 client.go:168] LocalClient.Create starting
	I1216 11:47:51.319916  268169 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem
	I1216 11:47:51.319956  268169 main.go:141] libmachine: Decoding PEM data...
	I1216 11:47:51.319975  268169 main.go:141] libmachine: Parsing certificate...
	I1216 11:47:51.320047  268169 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem
	I1216 11:47:51.320073  268169 main.go:141] libmachine: Decoding PEM data...
	I1216 11:47:51.320090  268169 main.go:141] libmachine: Parsing certificate...
	I1216 11:47:51.320117  268169 main.go:141] libmachine: Running pre-create checks...
	I1216 11:47:51.320131  268169 main.go:141] libmachine: (bridge-560939) Calling .PreCreateCheck
	I1216 11:47:51.320483  268169 main.go:141] libmachine: (bridge-560939) Calling .GetConfigRaw
	I1216 11:47:51.320925  268169 main.go:141] libmachine: Creating machine...
	I1216 11:47:51.320942  268169 main.go:141] libmachine: (bridge-560939) Calling .Create
	I1216 11:47:51.321136  268169 main.go:141] libmachine: (bridge-560939) creating KVM machine...
	I1216 11:47:51.321158  268169 main.go:141] libmachine: (bridge-560939) creating network...
	I1216 11:47:51.322244  268169 main.go:141] libmachine: (bridge-560939) DBG | found existing default KVM network
	I1216 11:47:51.323694  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:51.323530  269660 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:57:0c:7d} reservation:<nil>}
	I1216 11:47:51.324894  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:51.324812  269660 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000314010}
	I1216 11:47:51.324970  268169 main.go:141] libmachine: (bridge-560939) DBG | created network xml: 
	I1216 11:47:51.324992  268169 main.go:141] libmachine: (bridge-560939) DBG | <network>
	I1216 11:47:51.325004  268169 main.go:141] libmachine: (bridge-560939) DBG |   <name>mk-bridge-560939</name>
	I1216 11:47:51.325015  268169 main.go:141] libmachine: (bridge-560939) DBG |   <dns enable='no'/>
	I1216 11:47:51.325028  268169 main.go:141] libmachine: (bridge-560939) DBG |   
	I1216 11:47:51.325037  268169 main.go:141] libmachine: (bridge-560939) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1216 11:47:51.325048  268169 main.go:141] libmachine: (bridge-560939) DBG |     <dhcp>
	I1216 11:47:51.325064  268169 main.go:141] libmachine: (bridge-560939) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1216 11:47:51.325079  268169 main.go:141] libmachine: (bridge-560939) DBG |     </dhcp>
	I1216 11:47:51.325088  268169 main.go:141] libmachine: (bridge-560939) DBG |   </ip>
	I1216 11:47:51.325094  268169 main.go:141] libmachine: (bridge-560939) DBG |   
	I1216 11:47:51.325103  268169 main.go:141] libmachine: (bridge-560939) DBG | </network>
	I1216 11:47:51.325113  268169 main.go:141] libmachine: (bridge-560939) DBG | 
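The XML printed above is the libvirt definition for the private "mk-bridge-560939" network: DNS disabled and a DHCP range inside the free 192.168.50.0/24 subnet picked a few lines earlier. A small Go text/template sketch of rendering such a definition; the template itself is an assumption for illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const networkXML = `<network>
  <name>{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.First}}' end='{{.Last}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	t := template.Must(template.New("net").Parse(networkXML))
	// Values copied from the log above.
	err := t.Execute(os.Stdout, map[string]string{
		"Name":    "mk-bridge-560939",
		"Gateway": "192.168.50.1",
		"First":   "192.168.50.2",
		"Last":    "192.168.50.253",
	})
	if err != nil {
		panic(err)
	}
}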
	I1216 11:47:51.330471  268169 main.go:141] libmachine: (bridge-560939) DBG | trying to create private KVM network mk-bridge-560939 192.168.50.0/24...
	I1216 11:47:51.407226  268169 main.go:141] libmachine: (bridge-560939) setting up store path in /home/jenkins/minikube-integration/20107-210204/.minikube/machines/bridge-560939 ...
	I1216 11:47:51.407253  268169 main.go:141] libmachine: (bridge-560939) DBG | private KVM network mk-bridge-560939 192.168.50.0/24 created
	I1216 11:47:51.407269  268169 main.go:141] libmachine: (bridge-560939) building disk image from file:///home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1216 11:47:51.407287  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:51.407171  269660 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:47:51.407446  268169 main.go:141] libmachine: (bridge-560939) Downloading /home/jenkins/minikube-integration/20107-210204/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1216 11:47:51.703922  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:51.703774  269660 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/bridge-560939/id_rsa...
	I1216 11:47:51.961643  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:51.961499  269660 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/bridge-560939/bridge-560939.rawdisk...
	I1216 11:47:51.961685  268169 main.go:141] libmachine: (bridge-560939) DBG | Writing magic tar header
	I1216 11:47:51.961699  268169 main.go:141] libmachine: (bridge-560939) DBG | Writing SSH key tar header
	I1216 11:47:51.961711  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:51.961616  269660 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20107-210204/.minikube/machines/bridge-560939 ...
	I1216 11:47:51.961728  268169 main.go:141] libmachine: (bridge-560939) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/bridge-560939
	I1216 11:47:51.961738  268169 main.go:141] libmachine: (bridge-560939) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube/machines
	I1216 11:47:51.961752  268169 main.go:141] libmachine: (bridge-560939) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:47:51.961761  268169 main.go:141] libmachine: (bridge-560939) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204
	I1216 11:47:51.961773  268169 main.go:141] libmachine: (bridge-560939) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1216 11:47:51.961787  268169 main.go:141] libmachine: (bridge-560939) DBG | checking permissions on dir: /home/jenkins
	I1216 11:47:51.961798  268169 main.go:141] libmachine: (bridge-560939) DBG | checking permissions on dir: /home
	I1216 11:47:51.961808  268169 main.go:141] libmachine: (bridge-560939) DBG | skipping /home - not owner
	I1216 11:47:51.961830  268169 main.go:141] libmachine: (bridge-560939) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube/machines/bridge-560939 (perms=drwx------)
	I1216 11:47:51.961848  268169 main.go:141] libmachine: (bridge-560939) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube/machines (perms=drwxr-xr-x)
	I1216 11:47:51.961871  268169 main.go:141] libmachine: (bridge-560939) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube (perms=drwxr-xr-x)
	I1216 11:47:51.961882  268169 main.go:141] libmachine: (bridge-560939) setting executable bit set on /home/jenkins/minikube-integration/20107-210204 (perms=drwxrwxr-x)
	I1216 11:47:51.961896  268169 main.go:141] libmachine: (bridge-560939) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 11:47:51.961907  268169 main.go:141] libmachine: (bridge-560939) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 11:47:51.961917  268169 main.go:141] libmachine: (bridge-560939) creating domain...
	I1216 11:47:51.963077  268169 main.go:141] libmachine: (bridge-560939) define libvirt domain using xml: 
	I1216 11:47:51.963097  268169 main.go:141] libmachine: (bridge-560939) <domain type='kvm'>
	I1216 11:47:51.963106  268169 main.go:141] libmachine: (bridge-560939)   <name>bridge-560939</name>
	I1216 11:47:51.963113  268169 main.go:141] libmachine: (bridge-560939)   <memory unit='MiB'>3072</memory>
	I1216 11:47:51.963120  268169 main.go:141] libmachine: (bridge-560939)   <vcpu>2</vcpu>
	I1216 11:47:51.963126  268169 main.go:141] libmachine: (bridge-560939)   <features>
	I1216 11:47:51.963134  268169 main.go:141] libmachine: (bridge-560939)     <acpi/>
	I1216 11:47:51.963141  268169 main.go:141] libmachine: (bridge-560939)     <apic/>
	I1216 11:47:51.963151  268169 main.go:141] libmachine: (bridge-560939)     <pae/>
	I1216 11:47:51.963158  268169 main.go:141] libmachine: (bridge-560939)     
	I1216 11:47:51.963168  268169 main.go:141] libmachine: (bridge-560939)   </features>
	I1216 11:47:51.963175  268169 main.go:141] libmachine: (bridge-560939)   <cpu mode='host-passthrough'>
	I1216 11:47:51.963185  268169 main.go:141] libmachine: (bridge-560939)   
	I1216 11:47:51.963190  268169 main.go:141] libmachine: (bridge-560939)   </cpu>
	I1216 11:47:51.963201  268169 main.go:141] libmachine: (bridge-560939)   <os>
	I1216 11:47:51.963207  268169 main.go:141] libmachine: (bridge-560939)     <type>hvm</type>
	I1216 11:47:51.963217  268169 main.go:141] libmachine: (bridge-560939)     <boot dev='cdrom'/>
	I1216 11:47:51.963224  268169 main.go:141] libmachine: (bridge-560939)     <boot dev='hd'/>
	I1216 11:47:51.963236  268169 main.go:141] libmachine: (bridge-560939)     <bootmenu enable='no'/>
	I1216 11:47:51.963242  268169 main.go:141] libmachine: (bridge-560939)   </os>
	I1216 11:47:51.963251  268169 main.go:141] libmachine: (bridge-560939)   <devices>
	I1216 11:47:51.963259  268169 main.go:141] libmachine: (bridge-560939)     <disk type='file' device='cdrom'>
	I1216 11:47:51.963276  268169 main.go:141] libmachine: (bridge-560939)       <source file='/home/jenkins/minikube-integration/20107-210204/.minikube/machines/bridge-560939/boot2docker.iso'/>
	I1216 11:47:51.963285  268169 main.go:141] libmachine: (bridge-560939)       <target dev='hdc' bus='scsi'/>
	I1216 11:47:51.963293  268169 main.go:141] libmachine: (bridge-560939)       <readonly/>
	I1216 11:47:51.963301  268169 main.go:141] libmachine: (bridge-560939)     </disk>
	I1216 11:47:51.963330  268169 main.go:141] libmachine: (bridge-560939)     <disk type='file' device='disk'>
	I1216 11:47:51.963343  268169 main.go:141] libmachine: (bridge-560939)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 11:47:51.963357  268169 main.go:141] libmachine: (bridge-560939)       <source file='/home/jenkins/minikube-integration/20107-210204/.minikube/machines/bridge-560939/bridge-560939.rawdisk'/>
	I1216 11:47:51.963367  268169 main.go:141] libmachine: (bridge-560939)       <target dev='hda' bus='virtio'/>
	I1216 11:47:51.963374  268169 main.go:141] libmachine: (bridge-560939)     </disk>
	I1216 11:47:51.963384  268169 main.go:141] libmachine: (bridge-560939)     <interface type='network'>
	I1216 11:47:51.963393  268169 main.go:141] libmachine: (bridge-560939)       <source network='mk-bridge-560939'/>
	I1216 11:47:51.963406  268169 main.go:141] libmachine: (bridge-560939)       <model type='virtio'/>
	I1216 11:47:51.963417  268169 main.go:141] libmachine: (bridge-560939)     </interface>
	I1216 11:47:51.963424  268169 main.go:141] libmachine: (bridge-560939)     <interface type='network'>
	I1216 11:47:51.963434  268169 main.go:141] libmachine: (bridge-560939)       <source network='default'/>
	I1216 11:47:51.963443  268169 main.go:141] libmachine: (bridge-560939)       <model type='virtio'/>
	I1216 11:47:51.963450  268169 main.go:141] libmachine: (bridge-560939)     </interface>
	I1216 11:47:51.963460  268169 main.go:141] libmachine: (bridge-560939)     <serial type='pty'>
	I1216 11:47:51.963467  268169 main.go:141] libmachine: (bridge-560939)       <target port='0'/>
	I1216 11:47:51.963476  268169 main.go:141] libmachine: (bridge-560939)     </serial>
	I1216 11:47:51.963483  268169 main.go:141] libmachine: (bridge-560939)     <console type='pty'>
	I1216 11:47:51.963493  268169 main.go:141] libmachine: (bridge-560939)       <target type='serial' port='0'/>
	I1216 11:47:51.963500  268169 main.go:141] libmachine: (bridge-560939)     </console>
	I1216 11:47:51.963514  268169 main.go:141] libmachine: (bridge-560939)     <rng model='virtio'>
	I1216 11:47:51.963526  268169 main.go:141] libmachine: (bridge-560939)       <backend model='random'>/dev/random</backend>
	I1216 11:47:51.963534  268169 main.go:141] libmachine: (bridge-560939)     </rng>
	I1216 11:47:51.963541  268169 main.go:141] libmachine: (bridge-560939)     
	I1216 11:47:51.963546  268169 main.go:141] libmachine: (bridge-560939)     
	I1216 11:47:51.963554  268169 main.go:141] libmachine: (bridge-560939)   </devices>
	I1216 11:47:51.963560  268169 main.go:141] libmachine: (bridge-560939) </domain>
	I1216 11:47:51.963571  268169 main.go:141] libmachine: (bridge-560939) 
	I1216 11:47:51.968602  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:74:20:49 in network default
	I1216 11:47:51.969298  268169 main.go:141] libmachine: (bridge-560939) starting domain...
	I1216 11:47:51.969328  268169 main.go:141] libmachine: (bridge-560939) ensuring networks are active...
	I1216 11:47:51.969341  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:51.970053  268169 main.go:141] libmachine: (bridge-560939) Ensuring network default is active
	I1216 11:47:51.970335  268169 main.go:141] libmachine: (bridge-560939) Ensuring network mk-bridge-560939 is active
	I1216 11:47:51.970893  268169 main.go:141] libmachine: (bridge-560939) getting domain XML...
	I1216 11:47:51.971697  268169 main.go:141] libmachine: (bridge-560939) creating domain...
	I1216 11:47:53.242986  268169 main.go:141] libmachine: (bridge-560939) waiting for IP...
	I1216 11:47:53.244054  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:53.244556  268169 main.go:141] libmachine: (bridge-560939) DBG | unable to find current IP address of domain bridge-560939 in network mk-bridge-560939
	I1216 11:47:53.244637  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:53.244584  269660 retry.go:31] will retry after 228.488477ms: waiting for domain to come up
	I1216 11:47:53.475382  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:53.476165  268169 main.go:141] libmachine: (bridge-560939) DBG | unable to find current IP address of domain bridge-560939 in network mk-bridge-560939
	I1216 11:47:53.476196  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:53.476129  269660 retry.go:31] will retry after 288.612938ms: waiting for domain to come up
	I1216 11:47:53.766835  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:53.767470  268169 main.go:141] libmachine: (bridge-560939) DBG | unable to find current IP address of domain bridge-560939 in network mk-bridge-560939
	I1216 11:47:53.767652  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:53.767567  269660 retry.go:31] will retry after 340.42563ms: waiting for domain to come up
	I1216 11:47:54.109470  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:54.110107  268169 main.go:141] libmachine: (bridge-560939) DBG | unable to find current IP address of domain bridge-560939 in network mk-bridge-560939
	I1216 11:47:54.110133  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:54.110037  269660 retry.go:31] will retry after 467.083342ms: waiting for domain to come up
	I1216 11:47:54.578700  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:54.579403  268169 main.go:141] libmachine: (bridge-560939) DBG | unable to find current IP address of domain bridge-560939 in network mk-bridge-560939
	I1216 11:47:54.579437  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:54.579361  269660 retry.go:31] will retry after 671.958202ms: waiting for domain to come up
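The repeated "will retry after ...ms: waiting for domain to come up" lines above are a polling loop: query libvirt for the new domain's IP, and if there is no DHCP lease yet, sleep for a growing delay and try again until a deadline. A minimal sketch of that pattern, where lookupIP is a hypothetical stand-in for the real lease query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a placeholder; in minikube this asks libvirt for the
// domain's DHCP lease in the private network.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("domain is up at", ip)
			return
		}
		fmt.Printf("will retry after %s: waiting for domain to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay, roughly like the log above
	}
	fmt.Println("timed out waiting for an IP")
}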
	I1216 11:47:51.039884  267586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:47:51.039925  267586 machine.go:96] duration metric: took 8.948291922s to provisionDockerMachine
	I1216 11:47:51.039941  267586 start.go:293] postStartSetup for "kubernetes-upgrade-854528" (driver="kvm2")
	I1216 11:47:51.039955  267586 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:47:51.039989  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:47:51.040434  267586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:47:51.040479  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:47:51.043824  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:51.044275  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:51.044303  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:51.044545  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:47:51.044784  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:51.044983  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:47:51.045116  267586 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa Username:docker}
	I1216 11:47:51.131375  267586 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:47:51.136070  267586 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 11:47:51.136102  267586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 11:47:51.136174  267586 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 11:47:51.136251  267586 filesync.go:149] local asset: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem -> 2175192.pem in /etc/ssl/certs
	I1216 11:47:51.136349  267586 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 11:47:51.147680  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:47:51.178117  267586 start.go:296] duration metric: took 138.155402ms for postStartSetup
	I1216 11:47:51.178185  267586 fix.go:56] duration metric: took 9.112577938s for fixHost
	I1216 11:47:51.178323  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:47:51.181716  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:51.182131  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:51.182167  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:51.182431  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:47:51.182664  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:51.182843  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:51.182978  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:47:51.183126  267586 main.go:141] libmachine: Using SSH client type: native
	I1216 11:47:51.183298  267586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.182 22 <nil> <nil>}
	I1216 11:47:51.183308  267586 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 11:47:51.297579  267586 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734349671.292128292
	
	I1216 11:47:51.297607  267586 fix.go:216] guest clock: 1734349671.292128292
	I1216 11:47:51.297619  267586 fix.go:229] Guest: 2024-12-16 11:47:51.292128292 +0000 UTC Remote: 2024-12-16 11:47:51.178193745 +0000 UTC m=+30.634200340 (delta=113.934547ms)
	I1216 11:47:51.297642  267586 fix.go:200] guest clock delta is within tolerance: 113.934547ms
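The fix.go lines above compare the guest clock (the output of "date +%s.%N" over SSH) against the host clock and accept the ~114ms delta as within tolerance. A small sketch of that comparison using the two timestamps from the log; the tolerance constant here is an assumption, not minikube's exact value:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Values copied from the fix.go lines above.
	guestOut := "1734349671.292128292" // guest `date +%s.%N`
	hostTime, _ := time.Parse(time.RFC3339Nano, "2024-12-16T11:47:51.178193745Z")

	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))

	delta := guestTime.Sub(hostTime)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %s exceeds tolerance, would sync the clock\n", delta)
	}
}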
	I1216 11:47:51.297647  267586 start.go:83] releasing machines lock for "kubernetes-upgrade-854528", held for 9.2320719s
	I1216 11:47:51.297673  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:47:51.297940  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetIP
	I1216 11:47:51.301009  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:51.301484  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:51.301534  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:51.301677  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:47:51.302263  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:47:51.302451  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .DriverName
	I1216 11:47:51.302564  267586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:47:51.302615  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:47:51.302662  267586 ssh_runner.go:195] Run: cat /version.json
	I1216 11:47:51.302684  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHHostname
	I1216 11:47:51.305352  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:51.305735  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:51.305764  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:51.305798  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:51.305925  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:47:51.306108  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:51.306226  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:51.306256  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:51.306260  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:47:51.306411  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHPort
	I1216 11:47:51.306417  267586 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa Username:docker}
	I1216 11:47:51.306561  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHKeyPath
	I1216 11:47:51.306730  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetSSHUsername
	I1216 11:47:51.306880  267586 sshutil.go:53] new ssh client: &{IP:192.168.61.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/kubernetes-upgrade-854528/id_rsa Username:docker}
	I1216 11:47:51.418587  267586 ssh_runner.go:195] Run: systemctl --version
	I1216 11:47:51.425647  267586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:47:51.577337  267586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 11:47:51.584727  267586 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 11:47:51.584815  267586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:47:51.594188  267586 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 11:47:51.594221  267586 start.go:495] detecting cgroup driver to use...
	I1216 11:47:51.594288  267586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:47:51.611222  267586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:47:51.627276  267586 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:47:51.627342  267586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:47:51.645147  267586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:47:51.659586  267586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:47:51.908064  267586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:47:52.255084  267586 docker.go:233] disabling docker service ...
	I1216 11:47:52.255169  267586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:47:52.407154  267586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:47:52.531084  267586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:47:52.949116  267586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:47:53.343199  267586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:47:53.556252  267586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:47:53.655955  267586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 11:47:53.656030  267586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:53.693585  267586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:47:53.693673  267586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:53.746760  267586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:53.786442  267586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:53.837089  267586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:47:53.891972  267586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:53.910257  267586 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:53.944477  267586 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:47:53.962616  267586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:47:53.977947  267586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
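The crictl.yaml write and the sed commands above point CRI-O at the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager by rewriting lines in /etc/crio/crio.conf.d/02-crio.conf. A simplified Go sketch of the two main in-place replacements (not minikube's code; it only mirrors the sed one-liners in the log):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Rewrite the pause image line, like the first sed above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Rewrite the cgroup manager line, like the second sed above.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0644); err != nil {
		panic(err)
	}
}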
	I1216 11:47:53.992628  267586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:47:54.379749  267586 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 11:47:55.276787  267586 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:47:55.276877  267586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:47:55.282704  267586 start.go:563] Will wait 60s for crictl version
	I1216 11:47:55.282796  267586 ssh_runner.go:195] Run: which crictl
	I1216 11:47:55.287025  267586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:47:55.324700  267586 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 11:47:55.324792  267586 ssh_runner.go:195] Run: crio --version
	I1216 11:47:55.357043  267586 ssh_runner.go:195] Run: crio --version
	I1216 11:47:55.392264  267586 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1216 11:47:50.503161  266763 out.go:235]   - Booting up control plane ...
	I1216 11:47:50.503296  266763 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 11:47:50.503437  266763 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 11:47:50.503538  266763 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 11:47:50.513422  266763 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 11:47:50.519769  266763 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 11:47:50.519834  266763 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 11:47:50.651832  266763 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 11:47:50.652009  266763 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 11:47:51.652876  266763 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001188253s
	I1216 11:47:51.653001  266763 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
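The two kubeadm probes above poll plain HTTP(S) health endpoints, so they can be reproduced from the node while the control plane comes up. A minimal sketch; the kubelet healthz port 10248 and the API server port 8443 are taken from this log, and hitting /readyz anonymously assumes the API server's default public-info-viewer RBAC:

	curl -s http://127.0.0.1:10248/healthz; echo          # kubelet health, expected to print "ok"
	curl -sk "https://127.0.0.1:8443/readyz?verbose"      # API server readiness with a per-check breakdown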
	I1216 11:47:55.393897  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) Calling .GetIP
	I1216 11:47:55.397576  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:55.398058  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:39:cd", ip: ""} in network mk-kubernetes-upgrade-854528: {Iface:virbr2 ExpiryTime:2024-12-16 12:46:50 +0000 UTC Type:0 Mac:52:54:00:b2:39:cd Iaid: IPaddr:192.168.61.182 Prefix:24 Hostname:kubernetes-upgrade-854528 Clientid:01:52:54:00:b2:39:cd}
	I1216 11:47:55.398092  267586 main.go:141] libmachine: (kubernetes-upgrade-854528) DBG | domain kubernetes-upgrade-854528 has defined IP address 192.168.61.182 and MAC address 52:54:00:b2:39:cd in network mk-kubernetes-upgrade-854528
	I1216 11:47:55.398422  267586 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 11:47:55.403391  267586 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-854528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:kubernetes-upgrade-854528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:47:55.403535  267586 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:47:55.403602  267586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:47:55.450111  267586 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 11:47:55.450137  267586 crio.go:433] Images already preloaded, skipping extraction
	I1216 11:47:55.450192  267586 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:47:55.485220  267586 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 11:47:55.485244  267586 cache_images.go:84] Images are preloaded, skipping loading
	I1216 11:47:55.485253  267586 kubeadm.go:934] updating node { 192.168.61.182 8443 v1.31.2 crio true true} ...
	I1216 11:47:55.485380  267586 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-854528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-854528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
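The unit fragment above is the override that ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp step later in this log); once installed, the effective unit can be inspected with systemd's own tooling, for example:

	systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in shown above
	systemctl show kubelet -p ExecStart    # the ExecStart actually in effect after the override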
	I1216 11:47:55.485485  267586 ssh_runner.go:195] Run: crio config
	I1216 11:47:55.532426  267586 cni.go:84] Creating CNI manager for ""
	I1216 11:47:55.532461  267586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:47:55.532481  267586 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 11:47:55.532513  267586 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.182 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-854528 NodeName:kubernetes-upgrade-854528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 11:47:55.532734  267586 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-854528"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.182"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.182"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 11:47:55.532812  267586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 11:47:55.543108  267586 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:47:55.543195  267586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:47:55.552833  267586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1216 11:47:55.570442  267586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:47:55.589392  267586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
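The generated config is copied to /var/tmp/minikube/kubeadm.yaml.new above; it can be exercised against the node without persisting anything by letting kubeadm do a dry run (a sketch, not something minikube itself runs here):

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run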
	I1216 11:47:56.654087  266763 kubeadm.go:310] [api-check] The API server is healthy after 5.003011818s
	I1216 11:47:56.671463  266763 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 11:47:56.691547  266763 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 11:47:56.726267  266763 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 11:47:56.726634  266763 kubeadm.go:310] [mark-control-plane] Marking the node flannel-560939 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 11:47:56.744691  266763 kubeadm.go:310] [bootstrap-token] Using token: pryq1i.rw4gx5atxuq3o0sn
	I1216 11:47:56.746289  266763 out.go:235]   - Configuring RBAC rules ...
	I1216 11:47:56.746510  266763 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 11:47:56.758880  266763 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 11:47:56.768155  266763 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 11:47:56.778725  266763 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 11:47:56.785937  266763 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 11:47:56.793487  266763 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 11:47:57.063006  266763 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 11:47:57.504148  266763 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 11:47:58.063513  266763 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 11:47:58.063542  266763 kubeadm.go:310] 
	I1216 11:47:58.063668  266763 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 11:47:58.063700  266763 kubeadm.go:310] 
	I1216 11:47:58.063811  266763 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 11:47:58.063820  266763 kubeadm.go:310] 
	I1216 11:47:58.063854  266763 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 11:47:58.063935  266763 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 11:47:58.064024  266763 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 11:47:58.064034  266763 kubeadm.go:310] 
	I1216 11:47:58.064131  266763 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 11:47:58.064151  266763 kubeadm.go:310] 
	I1216 11:47:58.064220  266763 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 11:47:58.064230  266763 kubeadm.go:310] 
	I1216 11:47:58.064301  266763 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 11:47:58.064416  266763 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 11:47:58.064529  266763 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 11:47:58.064540  266763 kubeadm.go:310] 
	I1216 11:47:58.064640  266763 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 11:47:58.064762  266763 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 11:47:58.064777  266763 kubeadm.go:310] 
	I1216 11:47:58.064846  266763 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pryq1i.rw4gx5atxuq3o0sn \
	I1216 11:47:58.064933  266763 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:308d0b4d152056f6ea26bd81937e5552170019dfa040d2f99e94fa77bd33e210 \
	I1216 11:47:58.064963  266763 kubeadm.go:310] 	--control-plane 
	I1216 11:47:58.064972  266763 kubeadm.go:310] 
	I1216 11:47:58.065059  266763 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 11:47:58.065073  266763 kubeadm.go:310] 
	I1216 11:47:58.065156  266763 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pryq1i.rw4gx5atxuq3o0sn \
	I1216 11:47:58.065307  266763 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:308d0b4d152056f6ea26bd81937e5552170019dfa040d2f99e94fa77bd33e210 
	I1216 11:47:58.065673  266763 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
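The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key, so it can be re-derived on the control-plane node to confirm a join command targets the right cluster (standard openssl pipeline, assuming the usual kubeadm CA path):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'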
	I1216 11:47:58.065711  266763 cni.go:84] Creating CNI manager for "flannel"
	I1216 11:47:58.067616  266763 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I1216 11:47:55.253492  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:55.254070  268169 main.go:141] libmachine: (bridge-560939) DBG | unable to find current IP address of domain bridge-560939 in network mk-bridge-560939
	I1216 11:47:55.254105  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:55.254032  269660 retry.go:31] will retry after 842.640425ms: waiting for domain to come up
	I1216 11:47:56.098040  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:56.098514  268169 main.go:141] libmachine: (bridge-560939) DBG | unable to find current IP address of domain bridge-560939 in network mk-bridge-560939
	I1216 11:47:56.098539  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:56.098488  269660 retry.go:31] will retry after 941.026746ms: waiting for domain to come up
	I1216 11:47:57.041449  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:57.042077  268169 main.go:141] libmachine: (bridge-560939) DBG | unable to find current IP address of domain bridge-560939 in network mk-bridge-560939
	I1216 11:47:57.042106  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:57.042035  269660 retry.go:31] will retry after 899.088761ms: waiting for domain to come up
	I1216 11:47:57.943336  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:57.943919  268169 main.go:141] libmachine: (bridge-560939) DBG | unable to find current IP address of domain bridge-560939 in network mk-bridge-560939
	I1216 11:47:57.943989  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:57.943915  269660 retry.go:31] will retry after 1.474533813s: waiting for domain to come up
	I1216 11:47:59.419856  268169 main.go:141] libmachine: (bridge-560939) DBG | domain bridge-560939 has defined MAC address 52:54:00:72:d2:1c in network mk-bridge-560939
	I1216 11:47:59.420365  268169 main.go:141] libmachine: (bridge-560939) DBG | unable to find current IP address of domain bridge-560939 in network mk-bridge-560939
	I1216 11:47:59.420417  268169 main.go:141] libmachine: (bridge-560939) DBG | I1216 11:47:59.420325  269660 retry.go:31] will retry after 1.86969745s: waiting for domain to come up
	I1216 11:47:58.068860  266763 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 11:47:58.075098  266763 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1216 11:47:58.075120  266763 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I1216 11:47:58.097776  266763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
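Once the flannel manifest has been applied, the node only turns Ready after flannel drops its CNI config and its pod is running; a quick check from the node (a sketch using the admin kubeconfig kubeadm writes):

	ls /etc/cni/net.d/                                                               # flannel's conflist should appear here
	sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get pods -A -o wide | grep -i flannel
	sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes                   # Ready once the CNI is up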
	I1216 11:47:58.528053  266763 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 11:47:58.528215  266763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-560939 minikube.k8s.io/updated_at=2024_12_16T11_47_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8 minikube.k8s.io/name=flannel-560939 minikube.k8s.io/primary=true
	I1216 11:47:58.528221  266763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:47:58.555590  266763 ops.go:34] apiserver oom_adj: -16
	I1216 11:47:58.675609  266763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:47:59.175909  266763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:47:59.675860  266763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:48:00.176022  266763 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:47:55.607897  267586 ssh_runner.go:195] Run: grep 192.168.61.182	control-plane.minikube.internal$ /etc/hosts
	I1216 11:47:55.612171  267586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:47:55.762087  267586 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:47:55.780082  267586 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528 for IP: 192.168.61.182
	I1216 11:47:55.780121  267586 certs.go:194] generating shared ca certs ...
	I1216 11:47:55.780152  267586 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:47:55.780377  267586 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 11:47:55.780449  267586 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 11:47:55.780467  267586 certs.go:256] generating profile certs ...
	I1216 11:47:55.780618  267586 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/client.key
	I1216 11:47:55.780698  267586 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.key.ae3c87f7
	I1216 11:47:55.780759  267586 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.key
	I1216 11:47:55.780986  267586 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem (1338 bytes)
	W1216 11:47:55.781044  267586 certs.go:480] ignoring /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519_empty.pem, impossibly tiny 0 bytes
	I1216 11:47:55.781063  267586 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 11:47:55.781105  267586 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:47:55.781159  267586 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:47:55.781201  267586 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 11:47:55.781282  267586 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:47:55.782175  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:47:55.821807  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 11:47:55.854301  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:47:55.887399  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 11:47:55.915344  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1216 11:47:55.944776  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 11:47:55.971152  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:47:56.003320  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kubernetes-upgrade-854528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 11:47:56.031705  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /usr/share/ca-certificates/2175192.pem (1708 bytes)
	I1216 11:47:56.058701  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:47:56.089693  267586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem --> /usr/share/ca-certificates/217519.pem (1338 bytes)
	I1216 11:47:56.120363  267586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:47:56.139087  267586 ssh_runner.go:195] Run: openssl version
	I1216 11:47:56.145454  267586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2175192.pem && ln -fs /usr/share/ca-certificates/2175192.pem /etc/ssl/certs/2175192.pem"
	I1216 11:47:56.159570  267586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2175192.pem
	I1216 11:47:56.165195  267586 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 10:42 /usr/share/ca-certificates/2175192.pem
	I1216 11:47:56.165265  267586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2175192.pem
	I1216 11:47:56.172759  267586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2175192.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 11:47:56.185740  267586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:47:56.199725  267586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:47:56.205652  267586 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:47:56.205722  267586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:47:56.211886  267586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 11:47:56.221778  267586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/217519.pem && ln -fs /usr/share/ca-certificates/217519.pem /etc/ssl/certs/217519.pem"
	I1216 11:47:56.233369  267586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/217519.pem
	I1216 11:47:56.238137  267586 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 10:42 /usr/share/ca-certificates/217519.pem
	I1216 11:47:56.238217  267586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/217519.pem
	I1216 11:47:56.244512  267586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/217519.pem /etc/ssl/certs/51391683.0"
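The openssl x509 -hash calls above compute the subject hash OpenSSL uses to look certificates up in /etc/ssl/certs, which is why each PEM is then symlinked to <hash>.0; the mapping can be reproduced by hand:

	h=$(openssl x509 -noout -hash -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"                        # b5213941 for the minikube CA in this log
	ls -l "/etc/ssl/certs/${h}.0"    # the symlink created by the ln -fs above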
	I1216 11:47:56.316615  267586 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:47:56.334311  267586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 11:47:56.407439  267586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 11:47:56.456210  267586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 11:47:56.506870  267586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 11:47:56.530314  267586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 11:47:56.548814  267586 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
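Each -checkend 86400 run above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; minikube only looks at the exit status, e.g.:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid in 24h" || echo "expires (or is already expired) within 24h"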
	I1216 11:47:56.643488  267586 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-854528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-854528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.182 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:47:56.643630  267586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:47:56.643703  267586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:47:56.897966  267586 cri.go:89] found id: "740ad1e1174ddde2f963a61ece59b5a7b49f6811c0d6619381724c66850ac5b9"
	I1216 11:47:56.897997  267586 cri.go:89] found id: "fa10d1e2d9b7383aa3b1682ab7a7b36d05e6562b7372668edcd8b5ee704c5bb7"
	I1216 11:47:56.898005  267586 cri.go:89] found id: "005f4c81c153a40289bfeee3751b2a0dfa56616b289a8de597b994676844964e"
	I1216 11:47:56.898010  267586 cri.go:89] found id: "27e16936256e0973dde572bdcd25c188a6594456e5154253d8753f5b74a951fa"
	I1216 11:47:56.898014  267586 cri.go:89] found id: "63a6491ddda8fc606aa6c802970ff4df0437b474a2bea29f0d5d882351bffa07"
	I1216 11:47:56.898019  267586 cri.go:89] found id: "519e47ce67f575fc81d7360125d973dd00ac785e927c9d9a775ca6a66d13fce7"
	I1216 11:47:56.898023  267586 cri.go:89] found id: "bdbb47b3b665677da774d99ccff797a475e25dbaa6bbd18eca6f100cb726c092"
	I1216 11:47:56.898027  267586 cri.go:89] found id: "da5cc9845deabaed9ae08432477e750346fb26461376b701ad02fb7a46e12115"
	I1216 11:47:56.898030  267586 cri.go:89] found id: "b0d38a6b503c8644eddbcb6de93f698839defe2db2e065cc3c11bd56b6797c06"
	I1216 11:47:56.898039  267586 cri.go:89] found id: "acfa6012a2f79ee278919436b6dda6c2653112f0ba56e91a390e857037e802eb"
	I1216 11:47:56.898044  267586 cri.go:89] found id: ""
	I1216 11:47:56.898095  267586 ssh_runner.go:195] Run: sudo runc list -f json
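The IDs listed above come from filtering CRI containers by pod namespace; the same view, plus per-container detail, is available directly (a sketch built from crictl subcommands, with crictl inspect added and one ID taken from the list above):

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system    # names and states, not just IDs
	sudo crictl inspect 740ad1e1174ddde2f963a61ece59b5a7b49f6811c0d6619381724c66850ac5b9 | head
	sudo runc list -f json                                               # low-level OCI runtime view, as run above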

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-854528 -n kubernetes-upgrade-854528
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-854528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-854528" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-854528
--- FAIL: TestKubernetesUpgrade (438.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (311.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-933974 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-933974 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m11.30778623s)

                                                
                                                
-- stdout --
	* [old-k8s-version-933974] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-933974" primary control-plane node in "old-k8s-version-933974" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:47:37.825024  269561 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:47:37.825176  269561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:47:37.825186  269561 out.go:358] Setting ErrFile to fd 2...
	I1216 11:47:37.825191  269561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:47:37.825396  269561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:47:37.826075  269561 out.go:352] Setting JSON to false
	I1216 11:47:37.827220  269561 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12605,"bootTime":1734337053,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 11:47:37.827328  269561 start.go:139] virtualization: kvm guest
	I1216 11:47:37.829287  269561 out.go:177] * [old-k8s-version-933974] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 11:47:37.830683  269561 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:47:37.830738  269561 notify.go:220] Checking for updates...
	I1216 11:47:37.832934  269561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:47:37.834070  269561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:47:37.835212  269561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:47:37.836246  269561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 11:47:37.837358  269561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:47:37.838978  269561 config.go:182] Loaded profile config "bridge-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:47:37.839067  269561 config.go:182] Loaded profile config "flannel-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:47:37.839139  269561 config.go:182] Loaded profile config "kubernetes-upgrade-854528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:47:37.839225  269561 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:47:37.875978  269561 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 11:47:37.877114  269561 start.go:297] selected driver: kvm2
	I1216 11:47:37.877130  269561 start.go:901] validating driver "kvm2" against <nil>
	I1216 11:47:37.877142  269561 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:47:37.877979  269561 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:47:37.878086  269561 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 11:47:37.894314  269561 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 11:47:37.894381  269561 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:47:37.894650  269561 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:47:37.894685  269561 cni.go:84] Creating CNI manager for ""
	I1216 11:47:37.894728  269561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:47:37.894737  269561 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 11:47:37.894807  269561 start.go:340] cluster config:
	{Name:old-k8s-version-933974 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:47:37.894904  269561 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:47:37.896669  269561 out.go:177] * Starting "old-k8s-version-933974" primary control-plane node in "old-k8s-version-933974" cluster
	I1216 11:47:37.897825  269561 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 11:47:37.897861  269561 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 11:47:37.897868  269561 cache.go:56] Caching tarball of preloaded images
	I1216 11:47:37.897945  269561 preload.go:172] Found /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 11:47:37.897957  269561 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 11:47:37.898052  269561 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/config.json ...
	I1216 11:47:37.898070  269561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/config.json: {Name:mkfe4215b798120ec67203e9963a4936f9ecd548 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:47:37.898191  269561 start.go:360] acquireMachinesLock for old-k8s-version-933974: {Name:mk416695f38f04563a2ca155c5383ee4d8f0f97b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:48:18.953688  269561 start.go:364] duration metric: took 41.055453726s to acquireMachinesLock for "old-k8s-version-933974"
	I1216 11:48:18.953766  269561 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-933974 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:48:18.953866  269561 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 11:48:18.955940  269561 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1216 11:48:18.956153  269561 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:48:18.956214  269561 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:48:18.976480  269561 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42487
	I1216 11:48:18.977031  269561 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:48:18.977679  269561 main.go:141] libmachine: Using API Version  1
	I1216 11:48:18.977706  269561 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:48:18.978140  269561 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:48:18.978395  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetMachineName
	I1216 11:48:18.978603  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:48:18.978799  269561 start.go:159] libmachine.API.Create for "old-k8s-version-933974" (driver="kvm2")
	I1216 11:48:18.978831  269561 client.go:168] LocalClient.Create starting
	I1216 11:48:18.978868  269561 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem
	I1216 11:48:18.978909  269561 main.go:141] libmachine: Decoding PEM data...
	I1216 11:48:18.978929  269561 main.go:141] libmachine: Parsing certificate...
	I1216 11:48:18.979012  269561 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem
	I1216 11:48:18.979046  269561 main.go:141] libmachine: Decoding PEM data...
	I1216 11:48:18.979061  269561 main.go:141] libmachine: Parsing certificate...
	I1216 11:48:18.979089  269561 main.go:141] libmachine: Running pre-create checks...
	I1216 11:48:18.979103  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .PreCreateCheck
	I1216 11:48:18.979583  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetConfigRaw
	I1216 11:48:18.980081  269561 main.go:141] libmachine: Creating machine...
	I1216 11:48:18.980102  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .Create
	I1216 11:48:18.980257  269561 main.go:141] libmachine: (old-k8s-version-933974) creating KVM machine...
	I1216 11:48:18.980279  269561 main.go:141] libmachine: (old-k8s-version-933974) creating network...
	I1216 11:48:18.981563  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found existing default KVM network
	I1216 11:48:18.982812  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:18.982640  270159 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:57:0c:7d} reservation:<nil>}
	I1216 11:48:18.983672  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:18.983583  270159 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:21:e6:80} reservation:<nil>}
	I1216 11:48:18.984603  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:18.984507  270159 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000301070}
	I1216 11:48:18.984624  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | created network xml: 
	I1216 11:48:18.984640  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | <network>
	I1216 11:48:18.984657  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG |   <name>mk-old-k8s-version-933974</name>
	I1216 11:48:18.984688  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG |   <dns enable='no'/>
	I1216 11:48:18.984711  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG |   
	I1216 11:48:18.984725  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1216 11:48:18.984754  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG |     <dhcp>
	I1216 11:48:18.984770  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1216 11:48:18.984784  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG |     </dhcp>
	I1216 11:48:18.984797  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG |   </ip>
	I1216 11:48:18.984808  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG |   
	I1216 11:48:18.984818  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | </network>
	I1216 11:48:18.984828  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | 
	I1216 11:48:18.989921  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | trying to create private KVM network mk-old-k8s-version-933974 192.168.61.0/24...
	I1216 11:48:19.068601  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | private KVM network mk-old-k8s-version-933974 192.168.61.0/24 created
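The XML above is an ordinary libvirt network definition (DNS disabled, a private /24 with a DHCP range); outside of minikube the same lifecycle can be driven with virsh, for example:

	virsh --connect qemu:///system net-list --all                               # mk-old-k8s-version-933974 should be listed and active
	virsh --connect qemu:///system net-dumpxml mk-old-k8s-version-933974       # should match the XML printed above
	virsh --connect qemu:///system net-dhcp-leases mk-old-k8s-version-933974   # lease for the VM's MAC once it boots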
	I1216 11:48:19.068648  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:19.068557  270159 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:48:19.068667  269561 main.go:141] libmachine: (old-k8s-version-933974) setting up store path in /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974 ...
	I1216 11:48:19.068677  269561 main.go:141] libmachine: (old-k8s-version-933974) building disk image from file:///home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1216 11:48:19.068693  269561 main.go:141] libmachine: (old-k8s-version-933974) Downloading /home/jenkins/minikube-integration/20107-210204/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1216 11:48:19.377862  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:19.377592  270159 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa...
	I1216 11:48:19.572325  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:19.572109  270159 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/old-k8s-version-933974.rawdisk...
	I1216 11:48:19.572383  269561 main.go:141] libmachine: (old-k8s-version-933974) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974 (perms=drwx------)
	I1216 11:48:19.572398  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | Writing magic tar header
	I1216 11:48:19.572410  269561 main.go:141] libmachine: (old-k8s-version-933974) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube/machines (perms=drwxr-xr-x)
	I1216 11:48:19.572428  269561 main.go:141] libmachine: (old-k8s-version-933974) setting executable bit set on /home/jenkins/minikube-integration/20107-210204/.minikube (perms=drwxr-xr-x)
	I1216 11:48:19.572443  269561 main.go:141] libmachine: (old-k8s-version-933974) setting executable bit set on /home/jenkins/minikube-integration/20107-210204 (perms=drwxrwxr-x)
	I1216 11:48:19.572454  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | Writing SSH key tar header
	I1216 11:48:19.572473  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:19.572232  270159 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974 ...
	I1216 11:48:19.572496  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974
	I1216 11:48:19.572519  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube/machines
	I1216 11:48:19.572536  269561 main.go:141] libmachine: (old-k8s-version-933974) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 11:48:19.572546  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:48:19.572558  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20107-210204
	I1216 11:48:19.572568  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I1216 11:48:19.572579  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | checking permissions on dir: /home/jenkins
	I1216 11:48:19.572587  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | checking permissions on dir: /home
	I1216 11:48:19.572598  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | skipping /home - not owner
	I1216 11:48:19.572612  269561 main.go:141] libmachine: (old-k8s-version-933974) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
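The .rawdisk written above is a sparse raw image that the domain XML below attaches as the VM's main disk; its real versus virtual size can be checked with qemu-img, for instance:

	qemu-img info /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/old-k8s-version-933974.rawdisk
	ls -lhs /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/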
	I1216 11:48:19.572624  269561 main.go:141] libmachine: (old-k8s-version-933974) creating domain...
	I1216 11:48:19.573873  269561 main.go:141] libmachine: (old-k8s-version-933974) define libvirt domain using xml: 
	I1216 11:48:19.573901  269561 main.go:141] libmachine: (old-k8s-version-933974) <domain type='kvm'>
	I1216 11:48:19.573912  269561 main.go:141] libmachine: (old-k8s-version-933974)   <name>old-k8s-version-933974</name>
	I1216 11:48:19.573923  269561 main.go:141] libmachine: (old-k8s-version-933974)   <memory unit='MiB'>2200</memory>
	I1216 11:48:19.573936  269561 main.go:141] libmachine: (old-k8s-version-933974)   <vcpu>2</vcpu>
	I1216 11:48:19.573942  269561 main.go:141] libmachine: (old-k8s-version-933974)   <features>
	I1216 11:48:19.573964  269561 main.go:141] libmachine: (old-k8s-version-933974)     <acpi/>
	I1216 11:48:19.573978  269561 main.go:141] libmachine: (old-k8s-version-933974)     <apic/>
	I1216 11:48:19.573986  269561 main.go:141] libmachine: (old-k8s-version-933974)     <pae/>
	I1216 11:48:19.573992  269561 main.go:141] libmachine: (old-k8s-version-933974)     
	I1216 11:48:19.574027  269561 main.go:141] libmachine: (old-k8s-version-933974)   </features>
	I1216 11:48:19.574066  269561 main.go:141] libmachine: (old-k8s-version-933974)   <cpu mode='host-passthrough'>
	I1216 11:48:19.574076  269561 main.go:141] libmachine: (old-k8s-version-933974)   
	I1216 11:48:19.574082  269561 main.go:141] libmachine: (old-k8s-version-933974)   </cpu>
	I1216 11:48:19.574090  269561 main.go:141] libmachine: (old-k8s-version-933974)   <os>
	I1216 11:48:19.574101  269561 main.go:141] libmachine: (old-k8s-version-933974)     <type>hvm</type>
	I1216 11:48:19.574110  269561 main.go:141] libmachine: (old-k8s-version-933974)     <boot dev='cdrom'/>
	I1216 11:48:19.574121  269561 main.go:141] libmachine: (old-k8s-version-933974)     <boot dev='hd'/>
	I1216 11:48:19.574131  269561 main.go:141] libmachine: (old-k8s-version-933974)     <bootmenu enable='no'/>
	I1216 11:48:19.574139  269561 main.go:141] libmachine: (old-k8s-version-933974)   </os>
	I1216 11:48:19.574148  269561 main.go:141] libmachine: (old-k8s-version-933974)   <devices>
	I1216 11:48:19.574159  269561 main.go:141] libmachine: (old-k8s-version-933974)     <disk type='file' device='cdrom'>
	I1216 11:48:19.574173  269561 main.go:141] libmachine: (old-k8s-version-933974)       <source file='/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/boot2docker.iso'/>
	I1216 11:48:19.574186  269561 main.go:141] libmachine: (old-k8s-version-933974)       <target dev='hdc' bus='scsi'/>
	I1216 11:48:19.574195  269561 main.go:141] libmachine: (old-k8s-version-933974)       <readonly/>
	I1216 11:48:19.574205  269561 main.go:141] libmachine: (old-k8s-version-933974)     </disk>
	I1216 11:48:19.574215  269561 main.go:141] libmachine: (old-k8s-version-933974)     <disk type='file' device='disk'>
	I1216 11:48:19.574227  269561 main.go:141] libmachine: (old-k8s-version-933974)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1216 11:48:19.574241  269561 main.go:141] libmachine: (old-k8s-version-933974)       <source file='/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/old-k8s-version-933974.rawdisk'/>
	I1216 11:48:19.574252  269561 main.go:141] libmachine: (old-k8s-version-933974)       <target dev='hda' bus='virtio'/>
	I1216 11:48:19.574263  269561 main.go:141] libmachine: (old-k8s-version-933974)     </disk>
	I1216 11:48:19.574272  269561 main.go:141] libmachine: (old-k8s-version-933974)     <interface type='network'>
	I1216 11:48:19.574285  269561 main.go:141] libmachine: (old-k8s-version-933974)       <source network='mk-old-k8s-version-933974'/>
	I1216 11:48:19.574298  269561 main.go:141] libmachine: (old-k8s-version-933974)       <model type='virtio'/>
	I1216 11:48:19.574310  269561 main.go:141] libmachine: (old-k8s-version-933974)     </interface>
	I1216 11:48:19.574320  269561 main.go:141] libmachine: (old-k8s-version-933974)     <interface type='network'>
	I1216 11:48:19.574330  269561 main.go:141] libmachine: (old-k8s-version-933974)       <source network='default'/>
	I1216 11:48:19.574340  269561 main.go:141] libmachine: (old-k8s-version-933974)       <model type='virtio'/>
	I1216 11:48:19.574349  269561 main.go:141] libmachine: (old-k8s-version-933974)     </interface>
	I1216 11:48:19.574358  269561 main.go:141] libmachine: (old-k8s-version-933974)     <serial type='pty'>
	I1216 11:48:19.574367  269561 main.go:141] libmachine: (old-k8s-version-933974)       <target port='0'/>
	I1216 11:48:19.574377  269561 main.go:141] libmachine: (old-k8s-version-933974)     </serial>
	I1216 11:48:19.574385  269561 main.go:141] libmachine: (old-k8s-version-933974)     <console type='pty'>
	I1216 11:48:19.574393  269561 main.go:141] libmachine: (old-k8s-version-933974)       <target type='serial' port='0'/>
	I1216 11:48:19.574411  269561 main.go:141] libmachine: (old-k8s-version-933974)     </console>
	I1216 11:48:19.574422  269561 main.go:141] libmachine: (old-k8s-version-933974)     <rng model='virtio'>
	I1216 11:48:19.574432  269561 main.go:141] libmachine: (old-k8s-version-933974)       <backend model='random'>/dev/random</backend>
	I1216 11:48:19.574442  269561 main.go:141] libmachine: (old-k8s-version-933974)     </rng>
	I1216 11:48:19.574450  269561 main.go:141] libmachine: (old-k8s-version-933974)     
	I1216 11:48:19.574460  269561 main.go:141] libmachine: (old-k8s-version-933974)     
	I1216 11:48:19.574468  269561 main.go:141] libmachine: (old-k8s-version-933974)   </devices>
	I1216 11:48:19.574478  269561 main.go:141] libmachine: (old-k8s-version-933974) </domain>
	I1216 11:48:19.574489  269561 main.go:141] libmachine: (old-k8s-version-933974) 
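	// --- illustrative sketch, not minikube source or log output ---
	// The domain XML assembled above is handed to libvirt, which defines and then
	// boots the VM. The same two steps can be reproduced from the shell with virsh;
	// the kvm2 driver itself goes through the libvirt API, so this is only an
	// approximation, and the XML file name used below is a hypothetical placeholder.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func defineAndStart(xmlPath, name string) error {
		// register the domain definition with libvirt
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v: %s", err, out)
		}
		// boot the freshly defined domain
		if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		if err := defineAndStart("old-k8s-version-933974.xml", "old-k8s-version-933974"); err != nil {
			fmt.Println(err)
		}
	}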
	I1216 11:48:19.579127  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:17:0e:b5 in network default
	I1216 11:48:19.579738  269561 main.go:141] libmachine: (old-k8s-version-933974) starting domain...
	I1216 11:48:19.579758  269561 main.go:141] libmachine: (old-k8s-version-933974) ensuring networks are active...
	I1216 11:48:19.579766  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:19.580469  269561 main.go:141] libmachine: (old-k8s-version-933974) Ensuring network default is active
	I1216 11:48:19.580759  269561 main.go:141] libmachine: (old-k8s-version-933974) Ensuring network mk-old-k8s-version-933974 is active
	I1216 11:48:19.581391  269561 main.go:141] libmachine: (old-k8s-version-933974) getting domain XML...
	I1216 11:48:19.582139  269561 main.go:141] libmachine: (old-k8s-version-933974) creating domain...
	I1216 11:48:20.967571  269561 main.go:141] libmachine: (old-k8s-version-933974) waiting for IP...
	I1216 11:48:20.968494  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:20.969079  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:20.969112  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:20.969053  270159 retry.go:31] will retry after 259.976811ms: waiting for domain to come up
	I1216 11:48:21.230656  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:21.231344  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:21.231376  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:21.231322  270159 retry.go:31] will retry after 379.666292ms: waiting for domain to come up
	I1216 11:48:21.613133  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:21.613773  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:21.613832  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:21.613747  270159 retry.go:31] will retry after 370.400634ms: waiting for domain to come up
	I1216 11:48:21.985483  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:21.985995  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:21.986120  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:21.985993  270159 retry.go:31] will retry after 382.198168ms: waiting for domain to come up
	I1216 11:48:22.369871  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:22.370491  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:22.370533  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:22.370445  270159 retry.go:31] will retry after 567.042777ms: waiting for domain to come up
	I1216 11:48:22.939189  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:22.939727  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:22.939758  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:22.939677  270159 retry.go:31] will retry after 917.185458ms: waiting for domain to come up
	I1216 11:48:23.859034  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:23.859651  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:23.859689  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:23.859626  270159 retry.go:31] will retry after 813.639392ms: waiting for domain to come up
	I1216 11:48:24.674598  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:24.675244  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:24.675281  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:24.675225  270159 retry.go:31] will retry after 1.103029475s: waiting for domain to come up
	I1216 11:48:25.780461  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:25.781008  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:25.781050  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:25.780995  270159 retry.go:31] will retry after 1.441411901s: waiting for domain to come up
	I1216 11:48:27.224826  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:27.225349  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:27.225404  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:27.225338  270159 retry.go:31] will retry after 2.139350607s: waiting for domain to come up
	I1216 11:48:29.367015  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:29.367721  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:29.367751  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:29.367687  270159 retry.go:31] will retry after 2.346123865s: waiting for domain to come up
	I1216 11:48:31.715131  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:31.715667  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:31.715696  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:31.715623  270159 retry.go:31] will retry after 2.437445621s: waiting for domain to come up
	I1216 11:48:34.155079  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:34.155621  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:34.155652  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:34.155583  270159 retry.go:31] will retry after 4.284569216s: waiting for domain to come up
	I1216 11:48:38.443181  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:38.443704  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:48:38.443722  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:48:38.443678  270159 retry.go:31] will retry after 4.237737952s: waiting for domain to come up
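	// --- illustrative sketch, not minikube source or log output ---
	// The "will retry after ..." lines above come from a poll loop: the driver asks
	// the network for the domain's address and sleeps a growing interval between
	// attempts until an IP appears or a deadline passes. A minimal stand-alone
	// version of that pattern (lookupIP is a hypothetical helper):
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			fmt.Printf("no IP yet, will retry after %v\n", delay)
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay += delay / 2 // grow the interval, roughly like the delays in the log
			}
		}
		return "", errors.New("timed out waiting for domain IP")
	}
	
	func main() {
		ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
		fmt.Println(ip, err)
	}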
	I1216 11:48:42.682845  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:42.683396  269561 main.go:141] libmachine: (old-k8s-version-933974) found domain IP: 192.168.61.2
	I1216 11:48:42.683426  269561 main.go:141] libmachine: (old-k8s-version-933974) reserving static IP address...
	I1216 11:48:42.683449  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has current primary IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:42.683865  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-933974", mac: "52:54:00:70:69:8b", ip: "192.168.61.2"} in network mk-old-k8s-version-933974
	I1216 11:48:42.765580  269561 main.go:141] libmachine: (old-k8s-version-933974) reserved static IP address 192.168.61.2 for domain old-k8s-version-933974
	I1216 11:48:42.765619  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | Getting to WaitForSSH function...
	I1216 11:48:42.765633  269561 main.go:141] libmachine: (old-k8s-version-933974) waiting for SSH...
	I1216 11:48:42.768117  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:42.768485  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:42.768509  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:42.768629  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | Using SSH client type: external
	I1216 11:48:42.768652  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa (-rw-------)
	I1216 11:48:42.768689  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 11:48:42.768709  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | About to run SSH command:
	I1216 11:48:42.768754  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | exit 0
	I1216 11:48:42.889016  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | SSH cmd err, output: <nil>: 
	I1216 11:48:42.889265  269561 main.go:141] libmachine: (old-k8s-version-933974) KVM machine creation complete
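	// --- illustrative sketch, not minikube source or log output ---
	// "waiting for SSH" above boils down to running the external ssh client shown in
	// the DBG line with the command `exit 0` until it returns status 0. A pared-down
	// check of the same idea (address and key path taken from the log; the option set
	// is trimmed relative to the full command line above):
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func sshReady(addr, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr,
			"exit 0")
		return cmd.Run() == nil // exit status 0 means the guest's sshd is reachable
	}
	
	func main() {
		fmt.Println(sshReady("192.168.61.2", "/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa"))
	}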
	I1216 11:48:42.889679  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetConfigRaw
	I1216 11:48:42.890291  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:48:42.890530  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:48:42.890722  269561 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1216 11:48:42.890738  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetState
	I1216 11:48:42.892184  269561 main.go:141] libmachine: Detecting operating system of created instance...
	I1216 11:48:42.892198  269561 main.go:141] libmachine: Waiting for SSH to be available...
	I1216 11:48:42.892203  269561 main.go:141] libmachine: Getting to WaitForSSH function...
	I1216 11:48:42.892208  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:42.894692  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:42.895059  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:42.895087  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:42.895240  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:48:42.895405  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:42.895601  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:42.895793  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:48:42.895975  269561 main.go:141] libmachine: Using SSH client type: native
	I1216 11:48:42.896738  269561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:48:42.896766  269561 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1216 11:48:42.996252  269561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:48:42.996298  269561 main.go:141] libmachine: Detecting the provisioner...
	I1216 11:48:42.996313  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:42.999213  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:42.999594  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:42.999623  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:42.999823  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:48:43.000026  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:43.000172  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:43.000305  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:48:43.000488  269561 main.go:141] libmachine: Using SSH client type: native
	I1216 11:48:43.000672  269561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:48:43.000682  269561 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1216 11:48:43.097473  269561 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1216 11:48:43.097539  269561 main.go:141] libmachine: found compatible host: buildroot
	I1216 11:48:43.097549  269561 main.go:141] libmachine: Provisioning with buildroot...
	I1216 11:48:43.097559  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetMachineName
	I1216 11:48:43.097821  269561 buildroot.go:166] provisioning hostname "old-k8s-version-933974"
	I1216 11:48:43.097849  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetMachineName
	I1216 11:48:43.098077  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:43.100791  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.101224  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:43.101254  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.101434  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:48:43.101600  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:43.101743  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:43.101837  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:48:43.101970  269561 main.go:141] libmachine: Using SSH client type: native
	I1216 11:48:43.102143  269561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:48:43.102155  269561 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-933974 && echo "old-k8s-version-933974" | sudo tee /etc/hostname
	I1216 11:48:43.213714  269561 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-933974
	
	I1216 11:48:43.213747  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:43.216570  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.216897  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:43.216926  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.217142  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:48:43.217343  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:43.217490  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:43.217616  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:48:43.217773  269561 main.go:141] libmachine: Using SSH client type: native
	I1216 11:48:43.217962  269561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:48:43.217985  269561 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-933974' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-933974/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-933974' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:48:43.325286  269561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:48:43.325328  269561 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 11:48:43.325356  269561 buildroot.go:174] setting up certificates
	I1216 11:48:43.325374  269561 provision.go:84] configureAuth start
	I1216 11:48:43.325385  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetMachineName
	I1216 11:48:43.325685  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetIP
	I1216 11:48:43.328657  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.329101  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:43.329128  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.329265  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:43.331364  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.331697  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:43.331726  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.331842  269561 provision.go:143] copyHostCerts
	I1216 11:48:43.331898  269561 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem, removing ...
	I1216 11:48:43.331908  269561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem
	I1216 11:48:43.331965  269561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 11:48:43.332069  269561 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem, removing ...
	I1216 11:48:43.332085  269561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem
	I1216 11:48:43.332107  269561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 11:48:43.332170  269561 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem, removing ...
	I1216 11:48:43.332177  269561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem
	I1216 11:48:43.332195  269561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
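	// --- illustrative sketch, not minikube source or log output ---
	// copyHostCerts above repeats one small pattern per certificate: if the
	// destination already exists it is removed, then the source bytes are copied
	// over and the size reported. A stand-alone version of that pattern
	// (the paths used in main are placeholders):
	package main
	
	import (
		"fmt"
		"os"
	)
	
	func replaceFile(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			os.Remove(dst) // "found ..., removing ..."
		}
		data, err := os.ReadFile(src)
		if err != nil {
			return err
		}
		if err := os.WriteFile(dst, data, 0o600); err != nil {
			return err
		}
		fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, len(data))
		return nil
	}
	
	func main() {
		_ = replaceFile("certs/ca.pem", "ca.pem")
	}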
	I1216 11:48:43.332254  269561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-933974 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-933974]
	I1216 11:48:43.566921  269561 provision.go:177] copyRemoteCerts
	I1216 11:48:43.566980  269561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:48:43.567006  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:43.570644  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.571088  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:43.571120  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.571255  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:48:43.571461  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:43.571624  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:48:43.571790  269561 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa Username:docker}
	I1216 11:48:43.651192  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:48:43.674945  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 11:48:43.699773  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 11:48:43.724771  269561 provision.go:87] duration metric: took 399.381399ms to configureAuth
	I1216 11:48:43.724802  269561 buildroot.go:189] setting minikube options for container-runtime
	I1216 11:48:43.724970  269561 config.go:182] Loaded profile config "old-k8s-version-933974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 11:48:43.725055  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:43.727858  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.728180  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:43.728208  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:43.728423  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:48:43.728677  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:43.728941  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:43.729155  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:48:43.729400  269561 main.go:141] libmachine: Using SSH client type: native
	I1216 11:48:43.729572  269561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:48:43.729587  269561 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:48:44.188735  269561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:48:44.188764  269561 main.go:141] libmachine: Checking connection to Docker...
	I1216 11:48:44.188774  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetURL
	I1216 11:48:44.190145  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | using libvirt version 6000000
	I1216 11:48:44.192421  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.192788  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:44.192822  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.192972  269561 main.go:141] libmachine: Docker is up and running!
	I1216 11:48:44.192993  269561 main.go:141] libmachine: Reticulating splines...
	I1216 11:48:44.193003  269561 client.go:171] duration metric: took 25.214162555s to LocalClient.Create
	I1216 11:48:44.193033  269561 start.go:167] duration metric: took 25.214236241s to libmachine.API.Create "old-k8s-version-933974"
	I1216 11:48:44.193045  269561 start.go:293] postStartSetup for "old-k8s-version-933974" (driver="kvm2")
	I1216 11:48:44.193055  269561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:48:44.193080  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:48:44.193382  269561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:48:44.193407  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:44.195637  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.196020  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:44.196051  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.196184  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:48:44.196401  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:44.196562  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:48:44.196711  269561 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa Username:docker}
	I1216 11:48:44.276057  269561 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:48:44.280657  269561 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 11:48:44.280682  269561 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 11:48:44.280760  269561 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 11:48:44.280849  269561 filesync.go:149] local asset: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem -> 2175192.pem in /etc/ssl/certs
	I1216 11:48:44.280988  269561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 11:48:44.291274  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:48:44.315060  269561 start.go:296] duration metric: took 121.997424ms for postStartSetup
	I1216 11:48:44.315132  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetConfigRaw
	I1216 11:48:44.315847  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetIP
	I1216 11:48:44.318447  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.318793  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:44.318815  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.319113  269561 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/config.json ...
	I1216 11:48:44.319350  269561 start.go:128] duration metric: took 25.365468529s to createHost
	I1216 11:48:44.319391  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:44.321791  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.322071  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:44.322114  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.322249  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:48:44.322474  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:44.322641  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:44.322856  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:48:44.322996  269561 main.go:141] libmachine: Using SSH client type: native
	I1216 11:48:44.323154  269561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:48:44.323171  269561 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 11:48:44.425689  269561 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734349724.401145293
	
	I1216 11:48:44.425718  269561 fix.go:216] guest clock: 1734349724.401145293
	I1216 11:48:44.425728  269561 fix.go:229] Guest: 2024-12-16 11:48:44.401145293 +0000 UTC Remote: 2024-12-16 11:48:44.319375574 +0000 UTC m=+66.534966804 (delta=81.769719ms)
	I1216 11:48:44.425763  269561 fix.go:200] guest clock delta is within tolerance: 81.769719ms
	I1216 11:48:44.425773  269561 start.go:83] releasing machines lock for "old-k8s-version-933974", held for 25.472052379s
	I1216 11:48:44.425805  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:48:44.426113  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetIP
	I1216 11:48:44.429140  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.429564  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:44.429594  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.429726  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:48:44.430225  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:48:44.430473  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:48:44.430578  269561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:48:44.430637  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:44.430708  269561 ssh_runner.go:195] Run: cat /version.json
	I1216 11:48:44.430730  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:48:44.433679  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.433979  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.434099  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:44.434134  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.434309  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:48:44.434392  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:44.434421  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:44.434468  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:44.434581  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:48:44.434697  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:48:44.434774  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:48:44.434861  269561 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa Username:docker}
	I1216 11:48:44.434958  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:48:44.435107  269561 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa Username:docker}
	I1216 11:48:44.533801  269561 ssh_runner.go:195] Run: systemctl --version
	I1216 11:48:44.540416  269561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:48:44.704617  269561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 11:48:44.711421  269561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 11:48:44.711488  269561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:48:44.727762  269561 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 11:48:44.727799  269561 start.go:495] detecting cgroup driver to use...
	I1216 11:48:44.727878  269561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:48:44.743638  269561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:48:44.758267  269561 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:48:44.758353  269561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:48:44.771818  269561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:48:44.785858  269561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:48:44.909488  269561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:48:45.053278  269561 docker.go:233] disabling docker service ...
	I1216 11:48:45.053350  269561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:48:45.069131  269561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:48:45.083958  269561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:48:45.230076  269561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:48:45.359782  269561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:48:45.373530  269561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:48:45.394101  269561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 11:48:45.394168  269561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:48:45.405488  269561 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:48:45.405574  269561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:48:45.416529  269561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:48:45.427247  269561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:48:45.437977  269561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:48:45.448385  269561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:48:45.458027  269561 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 11:48:45.458099  269561 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 11:48:45.472308  269561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
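	// --- illustrative sketch, not minikube source or log output ---
	// The netfilter step above first probes `sysctl net.bridge.bridge-nf-call-iptables`;
	// when that fails (the bridge module is not loaded yet, as in this run) it falls
	// back to `modprobe br_netfilter` and then enables IPv4 forwarding. As a sketch:
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			// sysctl key missing: load the module that provides it
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v", err)
			}
		}
		// make sure the host forwards IPv4 traffic for the pod network
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}
	
	func main() {
		fmt.Println(ensureBridgeNetfilter())
	}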
	I1216 11:48:45.483243  269561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:48:45.617892  269561 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 11:48:45.710392  269561 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:48:45.710477  269561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:48:45.715336  269561 start.go:563] Will wait 60s for crictl version
	I1216 11:48:45.715408  269561 ssh_runner.go:195] Run: which crictl
	I1216 11:48:45.719306  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:48:45.762964  269561 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 11:48:45.763061  269561 ssh_runner.go:195] Run: crio --version
	I1216 11:48:45.794789  269561 ssh_runner.go:195] Run: crio --version
	I1216 11:48:45.834569  269561 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 11:48:45.836003  269561 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetIP
	I1216 11:48:45.839509  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:45.839932  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:48:34 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:48:45.839962  269561 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:48:45.840181  269561 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 11:48:45.845052  269561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:48:45.859913  269561 kubeadm.go:883] updating cluster {Name:old-k8s-version-933974 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:48:45.860024  269561 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 11:48:45.860063  269561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:48:45.901862  269561 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 11:48:45.901933  269561 ssh_runner.go:195] Run: which lz4
	I1216 11:48:45.906865  269561 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 11:48:45.911963  269561 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 11:48:45.912007  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1216 11:48:47.455719  269561 crio.go:462] duration metric: took 1.548883748s to copy over tarball
	I1216 11:48:47.455820  269561 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 11:48:50.333095  269561 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.877233973s)
	I1216 11:48:50.333146  269561 crio.go:469] duration metric: took 2.877377109s to extract the tarball
	I1216 11:48:50.333158  269561 ssh_runner.go:146] rm: /preloaded.tar.lz4
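The three steps above are how the preload is applied when the images are not already in the runtime: copy the cached lz4 tarball to the node, unpack it into /var, and delete it. A minimal sketch of the same sequence, assuming direct SSH access to the node as the root user and IP from the cluster config (192.168.61.2) rather than minikube's internal ssh_runner:

	# copy the preloaded-image tarball to the node (source path taken from the log above)
	scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 root@192.168.61.2:/preloaded.tar.lz4
	# unpack into /var, preserving security xattrs, exactly as the logged tar invocation does
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	# the runtime should now report the preloaded images
	sudo crictl images --output json
	# remove the tarball once extracted
	sudo rm -f /preloaded.tar.lz4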
	I1216 11:48:50.376949  269561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:48:50.422012  269561 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 11:48:50.422052  269561 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 11:48:50.422137  269561 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:48:50.422147  269561 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:48:50.422174  269561 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:48:50.422147  269561 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:48:50.422197  269561 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 11:48:50.422178  269561 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 11:48:50.422230  269561 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 11:48:50.422208  269561 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:48:50.424167  269561 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:48:50.424199  269561 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 11:48:50.424205  269561 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 11:48:50.424172  269561 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 11:48:50.424252  269561 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:48:50.424275  269561 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:48:50.424345  269561 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:48:50.424422  269561 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:48:50.580070  269561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 11:48:50.582795  269561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:48:50.590449  269561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:48:50.601217  269561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:48:50.608697  269561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 11:48:50.617259  269561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:48:50.619861  269561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 11:48:50.644868  269561 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 11:48:50.644926  269561 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 11:48:50.644994  269561 ssh_runner.go:195] Run: which crictl
	I1216 11:48:50.766502  269561 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 11:48:50.766557  269561 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:48:50.766612  269561 ssh_runner.go:195] Run: which crictl
	I1216 11:48:50.771857  269561 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 11:48:50.771914  269561 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:48:50.771967  269561 ssh_runner.go:195] Run: which crictl
	I1216 11:48:50.779107  269561 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 11:48:50.779161  269561 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:48:50.779166  269561 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 11:48:50.779203  269561 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 11:48:50.779216  269561 ssh_runner.go:195] Run: which crictl
	I1216 11:48:50.779245  269561 ssh_runner.go:195] Run: which crictl
	I1216 11:48:50.779278  269561 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 11:48:50.779312  269561 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:48:50.779339  269561 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 11:48:50.779354  269561 ssh_runner.go:195] Run: which crictl
	I1216 11:48:50.779373  269561 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 11:48:50.779419  269561 ssh_runner.go:195] Run: which crictl
	I1216 11:48:50.779476  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 11:48:50.779561  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:48:50.783516  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:48:50.783558  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 11:48:50.792620  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 11:48:50.792626  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:48:50.792626  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:48:50.922462  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:48:50.926171  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 11:48:50.930457  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:48:50.930926  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 11:48:50.949820  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:48:50.949820  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 11:48:50.949820  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:48:51.080436  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 11:48:51.080463  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 11:48:51.080442  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:48:51.080442  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:48:51.113785  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:48:51.113856  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 11:48:51.117530  269561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:48:51.218828  269561 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 11:48:51.218853  269561 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 11:48:51.225843  269561 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 11:48:51.225863  269561 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 11:48:51.244238  269561 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 11:48:51.244275  269561 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 11:48:51.247524  269561 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 11:48:51.424029  269561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:48:51.570996  269561 cache_images.go:92] duration metric: took 1.148919694s to LoadCachedImages
	W1216 11:48:51.571123  269561 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
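The warning above indicates the per-image cache files are missing on the build host (the stat that fails is a local path under .minikube/cache/images), so the cached-image fallback is skipped and kubeadm will have to pull images itself. A quick way to confirm both sides, assuming the paths shown in the log:

	# on the build host: which per-image cache files actually exist?
	ls -l /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/
	# on the node: which of the required images does the runtime already have?
	sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|pause'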
	I1216 11:48:51.571142  269561 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.20.0 crio true true} ...
	I1216 11:48:51.571308  269561 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-933974 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
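The [Unit]/[Service] fragment above is the kubelet drop-in minikube generates; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. To confirm on the node what systemd will actually execute (CRI endpoint, node IP, hostname override), a sketch like this works:

	# the drop-in minikube scp'd onto the node
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	# the full unit as systemd resolves it, drop-ins included
	systemctl cat kubelet
	# the effective ExecStart line
	systemctl show kubelet -p ExecStart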
	I1216 11:48:51.571403  269561 ssh_runner.go:195] Run: crio config
	I1216 11:48:51.636403  269561 cni.go:84] Creating CNI manager for ""
	I1216 11:48:51.636434  269561 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:48:51.636447  269561 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 11:48:51.636472  269561 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-933974 NodeName:old-k8s-version-933974 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 11:48:51.636657  269561 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-933974"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
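The YAML above is the kubeadm config minikube renders; it is written to /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before init (see the cp further down). A minimal on-node check that the file kubeadm is fed matches this dump, assuming those paths:

	# inspect the config that kubeadm init will consume
	sudo cat /var/tmp/minikube/kubeadm.yaml
	# spot-check the fields most relevant to this failure: CRI socket, version, cgroup driver, pod subnet
	sudo grep -E 'criSocket|kubernetesVersion|cgroupDriver|podSubnet' /var/tmp/minikube/kubeadm.yaml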
	
	I1216 11:48:51.636741  269561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 11:48:51.647193  269561 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:48:51.647261  269561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:48:51.657405  269561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I1216 11:48:51.675929  269561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:48:51.694670  269561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I1216 11:48:51.714389  269561 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I1216 11:48:51.718684  269561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:48:51.731527  269561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:48:51.871902  269561 ssh_runner.go:195] Run: sudo systemctl start kubelet
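At this point the kubelet has been started but not yet asked to run a control plane; whether it is actually healthy can be checked immediately rather than waiting for the kubeadm timeout seen later. A small sketch, using only commands that appear elsewhere in this log or standard systemd tooling:

	# unit state right after the restart
	sudo systemctl is-active kubelet
	# the health endpoint kubeadm will poll during wait-control-plane
	curl -sSL http://localhost:10248/healthz
	# recent kubelet logs usually name the misconfiguration (cgroup driver, CRI socket, swap)
	sudo journalctl -xeu kubelet --no-pager | tail -n 50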
	I1216 11:48:51.891206  269561 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974 for IP: 192.168.61.2
	I1216 11:48:51.891240  269561 certs.go:194] generating shared ca certs ...
	I1216 11:48:51.891264  269561 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:48:51.891464  269561 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 11:48:51.891518  269561 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 11:48:51.891527  269561 certs.go:256] generating profile certs ...
	I1216 11:48:51.891609  269561 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/client.key
	I1216 11:48:51.891637  269561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/client.crt with IP's: []
	I1216 11:48:52.061893  269561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/client.crt ...
	I1216 11:48:52.061935  269561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/client.crt: {Name:mk0311f7c8e8eefac815e86ae25a88786ac41330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:48:52.062186  269561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/client.key ...
	I1216 11:48:52.062211  269561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/client.key: {Name:mkdddda643e7e058399f49ca7d10b23773d47293 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:48:52.062345  269561 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.key.52ddef80
	I1216 11:48:52.062373  269561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.crt.52ddef80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.2]
	I1216 11:48:52.137493  269561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.crt.52ddef80 ...
	I1216 11:48:52.137526  269561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.crt.52ddef80: {Name:mk10d73016b5d55472fab4cc91ac4c6b6a041c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:48:52.137740  269561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.key.52ddef80 ...
	I1216 11:48:52.137760  269561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.key.52ddef80: {Name:mk1b0a78a96e8962b3eff2c66200498b5f6b4248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:48:52.137862  269561 certs.go:381] copying /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.crt.52ddef80 -> /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.crt
	I1216 11:48:52.137969  269561 certs.go:385] copying /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.key.52ddef80 -> /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.key
	I1216 11:48:52.138049  269561 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.key
	I1216 11:48:52.138072  269561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.crt with IP's: []
	I1216 11:48:52.193649  269561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.crt ...
	I1216 11:48:52.193692  269561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.crt: {Name:mk29453a26c5cd95023de251d30c8131dae3e7fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:48:52.193889  269561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.key ...
	I1216 11:48:52.193910  269561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.key: {Name:mk93d2a8973cdd8bbb48af4fbe06430f8d2be4ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:48:52.194154  269561 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem (1338 bytes)
	W1216 11:48:52.194218  269561 certs.go:480] ignoring /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519_empty.pem, impossibly tiny 0 bytes
	I1216 11:48:52.194233  269561 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 11:48:52.194277  269561 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:48:52.194311  269561 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:48:52.194349  269561 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 11:48:52.194404  269561 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:48:52.194992  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:48:52.222434  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 11:48:52.247766  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:48:52.278381  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 11:48:52.302627  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 11:48:52.331225  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 11:48:52.356913  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:48:52.383054  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 11:48:52.412107  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:48:52.436986  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem --> /usr/share/ca-certificates/217519.pem (1338 bytes)
	I1216 11:48:52.461252  269561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /usr/share/ca-certificates/2175192.pem (1708 bytes)
	I1216 11:48:52.486087  269561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:48:52.502451  269561 ssh_runner.go:195] Run: openssl version
	I1216 11:48:52.509276  269561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/217519.pem && ln -fs /usr/share/ca-certificates/217519.pem /etc/ssl/certs/217519.pem"
	I1216 11:48:52.520303  269561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/217519.pem
	I1216 11:48:52.525187  269561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 10:42 /usr/share/ca-certificates/217519.pem
	I1216 11:48:52.525258  269561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/217519.pem
	I1216 11:48:52.531015  269561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/217519.pem /etc/ssl/certs/51391683.0"
	I1216 11:48:52.542536  269561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2175192.pem && ln -fs /usr/share/ca-certificates/2175192.pem /etc/ssl/certs/2175192.pem"
	I1216 11:48:52.554533  269561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2175192.pem
	I1216 11:48:52.559129  269561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 10:42 /usr/share/ca-certificates/2175192.pem
	I1216 11:48:52.559198  269561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2175192.pem
	I1216 11:48:52.564966  269561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2175192.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 11:48:52.575953  269561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:48:52.587400  269561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:48:52.592324  269561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:48:52.592399  269561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:48:52.598276  269561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
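The three cert installs above all follow the same pattern: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and create a <hash>.0 symlink in /etc/ssl/certs so the system trust store resolves it (b5213941 is the hash of minikubeCA.pem in this run). Condensed into one sketch for a single certificate:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	# subject hash that OpenSSL-style trust directories key on
	HASH=$(openssl x509 -hash -noout -in "$CERT")
	# link <hash>.0 to the certificate unless a link already exists
	sudo test -L "/etc/ssl/certs/${HASH}.0" || sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"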
	I1216 11:48:52.610651  269561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:48:52.615277  269561 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 11:48:52.615337  269561 kubeadm.go:392] StartCluster: {Name:old-k8s-version-933974 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:48:52.615417  269561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:48:52.615465  269561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:48:52.660098  269561 cri.go:89] found id: ""
	I1216 11:48:52.660177  269561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 11:48:52.671061  269561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:48:52.681333  269561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:48:52.694849  269561 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:48:52.694870  269561 kubeadm.go:157] found existing configuration files:
	
	I1216 11:48:52.694925  269561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:48:52.707758  269561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:48:52.707834  269561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:48:52.719655  269561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:48:52.731542  269561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:48:52.731635  269561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:48:52.743681  269561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:48:52.755486  269561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:48:52.755601  269561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:48:52.771509  269561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:48:52.786371  269561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:48:52.786446  269561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
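The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is deleted unless it already points at https://control-plane.minikube.internal:8443 (here none exist, so every grep exits 2 and every rm is a no-op). The same logic as one loop, a sketch rather than minikube's own code:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already targets the expected control-plane endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done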
	I1216 11:48:52.799579  269561 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 11:48:52.952372  269561 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 11:48:52.952633  269561 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 11:48:53.130326  269561 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 11:48:53.130465  269561 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 11:48:53.130612  269561 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 11:48:53.364371  269561 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 11:48:53.365916  269561 out.go:235]   - Generating certificates and keys ...
	I1216 11:48:53.366042  269561 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 11:48:53.366174  269561 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 11:48:53.713492  269561 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 11:48:54.200904  269561 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 11:48:54.462543  269561 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 11:48:54.557916  269561 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 11:48:54.909167  269561 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 11:48:54.909443  269561 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-933974] and IPs [192.168.61.2 127.0.0.1 ::1]
	I1216 11:48:55.028358  269561 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 11:48:55.028595  269561 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-933974] and IPs [192.168.61.2 127.0.0.1 ::1]
	I1216 11:48:55.351799  269561 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 11:48:55.477191  269561 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 11:48:55.841958  269561 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 11:48:55.842061  269561 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 11:48:55.937246  269561 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 11:48:56.187549  269561 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 11:48:56.333805  269561 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 11:48:56.535516  269561 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 11:48:56.558174  269561 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 11:48:56.558615  269561 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 11:48:56.558697  269561 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 11:48:56.730978  269561 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 11:48:56.733449  269561 out.go:235]   - Booting up control plane ...
	I1216 11:48:56.733580  269561 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 11:48:56.745185  269561 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 11:48:56.746054  269561 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 11:48:56.746902  269561 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 11:48:56.751265  269561 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 11:49:36.745428  269561 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 11:49:36.745598  269561 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:49:36.745889  269561 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:49:41.746445  269561 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:49:41.747298  269561 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:49:51.746849  269561 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:49:51.747117  269561 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:50:11.746557  269561 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:50:11.746816  269561 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:50:51.747766  269561 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:50:51.748090  269561 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:50:51.748111  269561 kubeadm.go:310] 
	I1216 11:50:51.748170  269561 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 11:50:51.748258  269561 kubeadm.go:310] 		timed out waiting for the condition
	I1216 11:50:51.748287  269561 kubeadm.go:310] 
	I1216 11:50:51.748332  269561 kubeadm.go:310] 	This error is likely caused by:
	I1216 11:50:51.748362  269561 kubeadm.go:310] 		- The kubelet is not running
	I1216 11:50:51.748463  269561 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 11:50:51.748470  269561 kubeadm.go:310] 
	I1216 11:50:51.748554  269561 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 11:50:51.748590  269561 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 11:50:51.748622  269561 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 11:50:51.748629  269561 kubeadm.go:310] 
	I1216 11:50:51.748723  269561 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 11:50:51.748804  269561 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 11:50:51.748811  269561 kubeadm.go:310] 
	I1216 11:50:51.748893  269561 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 11:50:51.749027  269561 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 11:50:51.749122  269561 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 11:50:51.749223  269561 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 11:50:51.749235  269561 kubeadm.go:310] 
	I1216 11:50:51.749584  269561 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 11:50:51.749664  269561 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 11:50:51.749722  269561 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1216 11:50:51.749844  269561 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-933974] and IPs [192.168.61.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-933974] and IPs [192.168.61.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-933974] and IPs [192.168.61.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-933974] and IPs [192.168.61.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
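Everything up to here says the same thing twice: kubeadm generated the certificates and kubeconfigs successfully, started wait-control-plane, and gave up after the kubelet never answered http://localhost:10248/healthz. The troubleshooting commands the error message itself recommends, gathered so they can be run in one pass on the node, assuming the CRI-O socket path from this log:

	# the probe kubeadm kept retrying
	curl -sSL http://localhost:10248/healthz
	# kubelet unit state and recent logs
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause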
	
	I1216 11:50:51.749883  269561 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 11:50:52.214709  269561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:50:52.228655  269561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:50:52.237571  269561 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:50:52.237592  269561 kubeadm.go:157] found existing configuration files:
	
	I1216 11:50:52.237637  269561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:50:52.246153  269561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:50:52.246239  269561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:50:52.254997  269561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:50:52.263299  269561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:50:52.263368  269561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:50:52.272086  269561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:50:52.280686  269561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:50:52.280743  269561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:50:52.289337  269561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:50:52.297513  269561 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:50:52.297573  269561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 11:50:52.306286  269561 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 11:50:52.375626  269561 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 11:50:52.375686  269561 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 11:50:52.502547  269561 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 11:50:52.502713  269561 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 11:50:52.502842  269561 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 11:50:52.669140  269561 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 11:50:52.672084  269561 out.go:235]   - Generating certificates and keys ...
	I1216 11:50:52.672222  269561 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 11:50:52.672307  269561 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 11:50:52.672420  269561 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 11:50:52.672503  269561 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 11:50:52.672603  269561 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 11:50:52.672679  269561 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 11:50:52.672762  269561 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 11:50:52.672845  269561 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 11:50:52.673064  269561 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 11:50:52.673187  269561 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 11:50:52.673247  269561 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 11:50:52.673320  269561 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 11:50:52.804233  269561 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 11:50:52.914834  269561 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 11:50:52.991027  269561 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 11:50:53.316927  269561 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 11:50:53.330423  269561 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 11:50:53.331359  269561 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 11:50:53.331437  269561 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 11:50:53.457358  269561 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 11:50:53.459341  269561 out.go:235]   - Booting up control plane ...
	I1216 11:50:53.459471  269561 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 11:50:53.463208  269561 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 11:50:53.464727  269561 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 11:50:53.465448  269561 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 11:50:53.475156  269561 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 11:51:33.477876  269561 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 11:51:33.478098  269561 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:51:33.478338  269561 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:51:38.478718  269561 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:51:38.479016  269561 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:51:48.479360  269561 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:51:48.479653  269561 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:52:08.478217  269561 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:52:08.478544  269561 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:52:48.476890  269561 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 11:52:48.477205  269561 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 11:52:48.477233  269561 kubeadm.go:310] 
	I1216 11:52:48.477296  269561 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 11:52:48.477358  269561 kubeadm.go:310] 		timed out waiting for the condition
	I1216 11:52:48.477376  269561 kubeadm.go:310] 
	I1216 11:52:48.477428  269561 kubeadm.go:310] 	This error is likely caused by:
	I1216 11:52:48.477468  269561 kubeadm.go:310] 		- The kubelet is not running
	I1216 11:52:48.477618  269561 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 11:52:48.477645  269561 kubeadm.go:310] 
	I1216 11:52:48.477773  269561 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 11:52:48.477820  269561 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 11:52:48.477885  269561 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 11:52:48.477931  269561 kubeadm.go:310] 
	I1216 11:52:48.478075  269561 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 11:52:48.478210  269561 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 11:52:48.478230  269561 kubeadm.go:310] 
	I1216 11:52:48.478361  269561 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 11:52:48.478475  269561 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 11:52:48.478583  269561 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 11:52:48.478677  269561 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 11:52:48.478689  269561 kubeadm.go:310] 
	I1216 11:52:48.478971  269561 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 11:52:48.479099  269561 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 11:52:48.479196  269561 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 11:52:48.479295  269561 kubeadm.go:394] duration metric: took 3m55.863953499s to StartCluster
	I1216 11:52:48.479416  269561 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:52:48.479504  269561 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:52:48.513395  269561 cri.go:89] found id: ""
	I1216 11:52:48.513431  269561 logs.go:282] 0 containers: []
	W1216 11:52:48.513443  269561 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:52:48.513451  269561 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:52:48.513511  269561 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:52:48.547996  269561 cri.go:89] found id: ""
	I1216 11:52:48.548039  269561 logs.go:282] 0 containers: []
	W1216 11:52:48.548053  269561 logs.go:284] No container was found matching "etcd"
	I1216 11:52:48.548063  269561 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:52:48.548177  269561 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:52:48.581481  269561 cri.go:89] found id: ""
	I1216 11:52:48.581517  269561 logs.go:282] 0 containers: []
	W1216 11:52:48.581529  269561 logs.go:284] No container was found matching "coredns"
	I1216 11:52:48.581538  269561 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:52:48.581612  269561 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:52:48.615886  269561 cri.go:89] found id: ""
	I1216 11:52:48.615915  269561 logs.go:282] 0 containers: []
	W1216 11:52:48.615925  269561 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:52:48.615931  269561 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:52:48.615992  269561 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:52:48.654727  269561 cri.go:89] found id: ""
	I1216 11:52:48.654761  269561 logs.go:282] 0 containers: []
	W1216 11:52:48.654769  269561 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:52:48.654775  269561 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:52:48.654832  269561 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:52:48.688346  269561 cri.go:89] found id: ""
	I1216 11:52:48.688380  269561 logs.go:282] 0 containers: []
	W1216 11:52:48.688392  269561 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:52:48.688401  269561 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:52:48.688472  269561 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:52:48.722953  269561 cri.go:89] found id: ""
	I1216 11:52:48.722984  269561 logs.go:282] 0 containers: []
	W1216 11:52:48.722995  269561 logs.go:284] No container was found matching "kindnet"
	I1216 11:52:48.723009  269561 logs.go:123] Gathering logs for kubelet ...
	I1216 11:52:48.723026  269561 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:52:48.774058  269561 logs.go:123] Gathering logs for dmesg ...
	I1216 11:52:48.774098  269561 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:52:48.788646  269561 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:52:48.788693  269561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:52:48.903644  269561 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:52:48.903681  269561 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:52:48.903700  269561 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:52:49.015007  269561 logs.go:123] Gathering logs for container status ...
	I1216 11:52:49.015052  269561 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 11:52:49.070355  269561 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 11:52:49.070428  269561 out.go:270] * 
	* 
	W1216 11:52:49.070495  269561 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 11:52:49.070511  269561 out.go:270] * 
	* 
	W1216 11:52:49.071629  269561 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 11:52:49.075516  269561 out.go:201] 
	W1216 11:52:49.076813  269561 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 11:52:49.076870  269561 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 11:52:49.076902  269561 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 11:52:49.078362  269561 out.go:201] 

                                                
                                                
** /stderr **
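The repeated [kubelet-check] messages in the log above are kubeadm polling the kubelet's local health endpoint (http://localhost:10248/healthz) until it answers or the wait-control-plane phase times out. A minimal Go sketch of the same probe, to run on the node while reproducing this failure (the helper name and timeouts here are illustrative, not part of the test suite):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeKubelet polls the kubelet healthz endpoint the same way the
	// [kubelet-check] lines above do.
	func probeKubelet(url string, timeout time.Duration) error {
		client := &http.Client{Timeout: 5 * time.Second}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("kubelet healthz: %s %s\n", resp.Status, string(body))
				return nil
			}
			fmt.Printf("kubelet not reachable yet: %v\n", err)
			time.Sleep(5 * time.Second)
		}
		return fmt.Errorf("kubelet did not become healthy within %s", timeout)
	}

	func main() {
		if err := probeKubelet("http://localhost:10248/healthz", 40*time.Second); err != nil {
			fmt.Println(err)
		}
	}

If the probe never succeeds, the kubelet journal gathered later in the log ("sudo journalctl -u kubelet -n 400") is the next place to look.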
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-933974 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 6 (288.647178ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:52:49.430122  275904 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-933974" does not appear in /home/jenkins/minikube-integration/20107-210204/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-933974" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (311.67s)
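The failure ends with minikube's own suggestion: inspect 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. A small Go sketch that pulls the kubelet journal from the profile VM the same way the log-gathering step above does (binary path and profile name are taken from this run; the helper itself is hypothetical and assumes the VM is still up):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Pull the kubelet journal from the profile VM, mirroring the
		// "sudo journalctl -u kubelet -n 400" gathering step in the log above.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "old-k8s-version-933974",
			"ssh", "sudo journalctl -u kubelet -n 400")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("journalctl via minikube ssh failed:", err)
		}
	}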

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-933974 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-933974 create -f testdata/busybox.yaml: exit status 1 (66.13925ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-933974" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-933974 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 6 (309.925466ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:52:49.808783  275946 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-933974" does not appear in /home/jenkins/minikube-integration/20107-210204/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-933974" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 6 (267.163553ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:52:50.078058  275976 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-933974" does not appear in /home/jenkins/minikube-integration/20107-210204/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-933974" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.64s)
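DeployApp fails before any resources are created because the kubeconfig used by kubectl has no "old-k8s-version-933974" context (see the status.go:458 error above); the status output suggests 'minikube update-context'. A hedged Go sketch that checks for the context up front instead of letting kubectl fail (context name from this run; the helper is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// contextExists reports whether the active kubeconfig knows the given context,
	// which is the precondition this DeployApp step fails on.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("old-k8s-version-933974")
		fmt.Println("context present:", ok, "err:", err)
	}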

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-933974 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1216 11:52:53.163552  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:57.591457  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:10.816441  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:17.760947  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:29.962647  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:29.969065  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:29.980465  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:30.001926  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:30.043421  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:30.125001  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:30.286596  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:30.608299  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:31.249603  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:32.531202  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:34.125099  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:35.092622  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:40.213974  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:48.522783  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:50.456160  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:50.823101  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:50.829534  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:50.840930  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:50.862393  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:50.903836  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:50.985394  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:51.146974  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:51.469120  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:52.111244  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:53.392659  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:53:55.954635  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:54:01.076805  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:54:10.937702  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:54:11.319138  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:54:19.513247  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-933974 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m36.004794145s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-933974 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-933974 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-933974 describe deploy/metrics-server -n kube-system: exit status 1 (47.365192ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-933974" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-933974 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 6 (232.248512ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:54:26.364644  276432 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-933974" does not appear in /home/jenkins/minikube-integration/20107-210204/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-933974" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (96.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (508.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-933974 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1216 11:54:31.800665  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:54:39.682383  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:54:51.899702  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:54:52.223567  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:54:56.046590  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:55:12.762802  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:55:19.926719  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:55:26.955976  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:55:54.658207  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:56:06.844166  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:56:13.821240  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:56:34.684597  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:56:35.654292  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:56:55.823694  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-933974 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m26.930649411s)

                                                
                                                
-- stdout --
	* [old-k8s-version-933974] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-933974" primary control-plane node in "old-k8s-version-933974" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-933974" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:54:27.972229  276553 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:54:27.972478  276553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:54:27.972487  276553 out.go:358] Setting ErrFile to fd 2...
	I1216 11:54:27.972492  276553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:54:27.972689  276553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:54:27.973312  276553 out.go:352] Setting JSON to false
	I1216 11:54:27.974421  276553 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13015,"bootTime":1734337053,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 11:54:27.974529  276553 start.go:139] virtualization: kvm guest
	I1216 11:54:27.976923  276553 out.go:177] * [old-k8s-version-933974] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 11:54:27.978427  276553 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:54:27.978429  276553 notify.go:220] Checking for updates...
	I1216 11:54:27.981002  276553 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:54:27.982457  276553 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:54:27.983824  276553 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:54:27.985258  276553 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 11:54:27.986697  276553 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:54:27.988736  276553 config.go:182] Loaded profile config "old-k8s-version-933974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 11:54:27.989361  276553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:54:27.989415  276553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:54:28.006308  276553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I1216 11:54:28.006902  276553 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:54:28.007610  276553 main.go:141] libmachine: Using API Version  1
	I1216 11:54:28.007643  276553 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:54:28.007995  276553 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:54:28.008222  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:54:28.010141  276553 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1216 11:54:28.011416  276553 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:54:28.011838  276553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:54:28.011901  276553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:54:28.028948  276553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41049
	I1216 11:54:28.029615  276553 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:54:28.030140  276553 main.go:141] libmachine: Using API Version  1
	I1216 11:54:28.030164  276553 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:54:28.030519  276553 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:54:28.030842  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:54:28.068907  276553 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 11:54:28.070198  276553 start.go:297] selected driver: kvm2
	I1216 11:54:28.070218  276553 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-933974 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:54:28.070421  276553 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:54:28.071245  276553 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:54:28.071350  276553 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 11:54:28.089191  276553 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 11:54:28.089807  276553 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:54:28.089852  276553 cni.go:84] Creating CNI manager for ""
	I1216 11:54:28.089883  276553 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:54:28.089921  276553 start.go:340] cluster config:
	{Name:old-k8s-version-933974 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:54:28.090022  276553 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:54:28.092048  276553 out.go:177] * Starting "old-k8s-version-933974" primary control-plane node in "old-k8s-version-933974" cluster
	I1216 11:54:28.093395  276553 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 11:54:28.093438  276553 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 11:54:28.093445  276553 cache.go:56] Caching tarball of preloaded images
	I1216 11:54:28.093536  276553 preload.go:172] Found /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 11:54:28.093548  276553 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1216 11:54:28.093697  276553 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/config.json ...
	I1216 11:54:28.093967  276553 start.go:360] acquireMachinesLock for old-k8s-version-933974: {Name:mk416695f38f04563a2ca155c5383ee4d8f0f97b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:54:28.094033  276553 start.go:364] duration metric: took 36.133µs to acquireMachinesLock for "old-k8s-version-933974"
	I1216 11:54:28.094057  276553 start.go:96] Skipping create...Using existing machine configuration
	I1216 11:54:28.094064  276553 fix.go:54] fixHost starting: 
	I1216 11:54:28.094362  276553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:54:28.094421  276553 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:54:28.111025  276553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I1216 11:54:28.111484  276553 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:54:28.112097  276553 main.go:141] libmachine: Using API Version  1
	I1216 11:54:28.112126  276553 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:54:28.112538  276553 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:54:28.112736  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:54:28.112852  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetState
	I1216 11:54:28.114778  276553 fix.go:112] recreateIfNeeded on old-k8s-version-933974: state=Stopped err=<nil>
	I1216 11:54:28.114824  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	W1216 11:54:28.115014  276553 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 11:54:28.116897  276553 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-933974" ...
	I1216 11:54:28.118330  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .Start
	I1216 11:54:28.118546  276553 main.go:141] libmachine: (old-k8s-version-933974) starting domain...
	I1216 11:54:28.118566  276553 main.go:141] libmachine: (old-k8s-version-933974) ensuring networks are active...
	I1216 11:54:28.119397  276553 main.go:141] libmachine: (old-k8s-version-933974) Ensuring network default is active
	I1216 11:54:28.119818  276553 main.go:141] libmachine: (old-k8s-version-933974) Ensuring network mk-old-k8s-version-933974 is active
	I1216 11:54:28.120302  276553 main.go:141] libmachine: (old-k8s-version-933974) getting domain XML...
	I1216 11:54:28.121140  276553 main.go:141] libmachine: (old-k8s-version-933974) creating domain...
	I1216 11:54:29.423624  276553 main.go:141] libmachine: (old-k8s-version-933974) waiting for IP...
	I1216 11:54:29.424479  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:29.424997  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:29.425165  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:29.425031  276589 retry.go:31] will retry after 194.376366ms: waiting for domain to come up
	I1216 11:54:29.621711  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:29.622261  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:29.622343  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:29.622227  276589 retry.go:31] will retry after 331.169392ms: waiting for domain to come up
	I1216 11:54:29.954901  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:29.955541  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:29.955560  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:29.955500  276589 retry.go:31] will retry after 402.623325ms: waiting for domain to come up
	I1216 11:54:30.360259  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:30.360836  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:30.360868  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:30.360793  276589 retry.go:31] will retry after 385.999098ms: waiting for domain to come up
	I1216 11:54:30.748507  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:30.749190  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:30.749220  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:30.749124  276589 retry.go:31] will retry after 634.497266ms: waiting for domain to come up
	I1216 11:54:31.385554  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:31.386014  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:31.386064  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:31.386007  276589 retry.go:31] will retry after 858.003803ms: waiting for domain to come up
	I1216 11:54:32.245903  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:32.246449  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:32.246474  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:32.246401  276589 retry.go:31] will retry after 802.600469ms: waiting for domain to come up
	I1216 11:54:33.050307  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:33.050799  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:33.050827  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:33.050760  276589 retry.go:31] will retry after 1.013464299s: waiting for domain to come up
	I1216 11:54:34.065369  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:34.065884  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:34.065915  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:34.065857  276589 retry.go:31] will retry after 1.786338994s: waiting for domain to come up
	I1216 11:54:35.854999  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:35.855412  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:35.855445  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:35.855366  276589 retry.go:31] will retry after 2.129929855s: waiting for domain to come up
	I1216 11:54:37.987418  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:37.988087  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:37.988140  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:37.988045  276589 retry.go:31] will retry after 2.85742734s: waiting for domain to come up
	I1216 11:54:40.847508  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:40.847988  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:40.848011  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:40.847941  276589 retry.go:31] will retry after 3.495149254s: waiting for domain to come up
	I1216 11:54:44.344839  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:44.345371  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | unable to find current IP address of domain old-k8s-version-933974 in network mk-old-k8s-version-933974
	I1216 11:54:44.345397  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | I1216 11:54:44.345329  276589 retry.go:31] will retry after 3.930569218s: waiting for domain to come up
	I1216 11:54:48.277137  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.277758  276553 main.go:141] libmachine: (old-k8s-version-933974) found domain IP: 192.168.61.2
	I1216 11:54:48.277778  276553 main.go:141] libmachine: (old-k8s-version-933974) reserving static IP address...
	I1216 11:54:48.277793  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has current primary IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.278257  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "old-k8s-version-933974", mac: "52:54:00:70:69:8b", ip: "192.168.61.2"} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:48.278290  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | skip adding static IP to network mk-old-k8s-version-933974 - found existing host DHCP lease matching {name: "old-k8s-version-933974", mac: "52:54:00:70:69:8b", ip: "192.168.61.2"}
	I1216 11:54:48.278310  276553 main.go:141] libmachine: (old-k8s-version-933974) reserved static IP address 192.168.61.2 for domain old-k8s-version-933974
	I1216 11:54:48.278330  276553 main.go:141] libmachine: (old-k8s-version-933974) waiting for SSH...
	I1216 11:54:48.278346  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | Getting to WaitForSSH function...
	I1216 11:54:48.281162  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.281489  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:48.281512  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.281684  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | Using SSH client type: external
	I1216 11:54:48.281706  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa (-rw-------)
	I1216 11:54:48.281728  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 11:54:48.281737  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | About to run SSH command:
	I1216 11:54:48.281754  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | exit 0
	I1216 11:54:48.409317  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | SSH cmd err, output: <nil>: 
	I1216 11:54:48.409832  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetConfigRaw
	I1216 11:54:48.410516  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetIP
	I1216 11:54:48.413245  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.413687  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:48.413722  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.413960  276553 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/config.json ...
	I1216 11:54:48.414166  276553 machine.go:93] provisionDockerMachine start ...
	I1216 11:54:48.414192  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:54:48.414420  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:54:48.416989  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.417425  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:48.417448  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.417578  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:54:48.417757  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:48.417930  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:48.418123  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:54:48.418333  276553 main.go:141] libmachine: Using SSH client type: native
	I1216 11:54:48.418583  276553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:54:48.418596  276553 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 11:54:48.525233  276553 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 11:54:48.525266  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetMachineName
	I1216 11:54:48.525522  276553 buildroot.go:166] provisioning hostname "old-k8s-version-933974"
	I1216 11:54:48.525548  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetMachineName
	I1216 11:54:48.525708  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:54:48.528638  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.529033  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:48.529066  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.529217  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:54:48.529388  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:48.529529  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:48.529656  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:54:48.529807  276553 main.go:141] libmachine: Using SSH client type: native
	I1216 11:54:48.529978  276553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:54:48.530010  276553 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-933974 && echo "old-k8s-version-933974" | sudo tee /etc/hostname
	I1216 11:54:48.650793  276553 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-933974
	
	I1216 11:54:48.650827  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:54:48.653984  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.654374  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:48.654404  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.654707  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:54:48.654982  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:48.655179  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:48.655361  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:54:48.655563  276553 main.go:141] libmachine: Using SSH client type: native
	I1216 11:54:48.655803  276553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:54:48.655827  276553 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-933974' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-933974/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-933974' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:54:48.778438  276553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:54:48.778479  276553 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 11:54:48.778522  276553 buildroot.go:174] setting up certificates
	I1216 11:54:48.778542  276553 provision.go:84] configureAuth start
	I1216 11:54:48.778558  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetMachineName
	I1216 11:54:48.778857  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetIP
	I1216 11:54:48.781522  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.781883  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:48.781909  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.782111  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:54:48.784577  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.784933  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:48.784987  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.785199  276553 provision.go:143] copyHostCerts
	I1216 11:54:48.785304  276553 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem, removing ...
	I1216 11:54:48.785319  276553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem
	I1216 11:54:48.785412  276553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 11:54:48.785558  276553 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem, removing ...
	I1216 11:54:48.785570  276553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem
	I1216 11:54:48.785609  276553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
	I1216 11:54:48.785687  276553 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem, removing ...
	I1216 11:54:48.785697  276553 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem
	I1216 11:54:48.785732  276553 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 11:54:48.785801  276553 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-933974 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-933974]
	I1216 11:54:48.944855  276553 provision.go:177] copyRemoteCerts
	I1216 11:54:48.944919  276553 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:54:48.944978  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:54:48.947842  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.948219  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:48.948254  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:48.948484  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:54:48.948685  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:48.948831  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:54:48.948982  276553 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa Username:docker}
	I1216 11:54:49.035061  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:54:49.062609  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1216 11:54:49.088386  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 11:54:49.113569  276553 provision.go:87] duration metric: took 335.012153ms to configureAuth
	I1216 11:54:49.113600  276553 buildroot.go:189] setting minikube options for container-runtime
	I1216 11:54:49.113776  276553 config.go:182] Loaded profile config "old-k8s-version-933974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1216 11:54:49.113878  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:54:49.116708  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.117007  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:49.117045  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.117163  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:54:49.117353  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:49.117554  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:49.117682  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:54:49.117837  276553 main.go:141] libmachine: Using SSH client type: native
	I1216 11:54:49.117997  276553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:54:49.118011  276553 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:54:49.343370  276553 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:54:49.343405  276553 machine.go:96] duration metric: took 929.22202ms to provisionDockerMachine
	I1216 11:54:49.343423  276553 start.go:293] postStartSetup for "old-k8s-version-933974" (driver="kvm2")
	I1216 11:54:49.343439  276553 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:54:49.343462  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:54:49.343839  276553 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:54:49.343874  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:54:49.347373  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.347844  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:49.347879  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.348072  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:54:49.348308  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:49.348495  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:54:49.348662  276553 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa Username:docker}
	I1216 11:54:49.437092  276553 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:54:49.442050  276553 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 11:54:49.442081  276553 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 11:54:49.442176  276553 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 11:54:49.442268  276553 filesync.go:149] local asset: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem -> 2175192.pem in /etc/ssl/certs
	I1216 11:54:49.442384  276553 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 11:54:49.453416  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:54:49.478925  276553 start.go:296] duration metric: took 135.482497ms for postStartSetup
	I1216 11:54:49.478978  276553 fix.go:56] duration metric: took 21.384914189s for fixHost
	I1216 11:54:49.479001  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:54:49.482206  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.482697  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:49.482729  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.482952  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:54:49.483155  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:49.483313  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:49.483480  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:54:49.483653  276553 main.go:141] libmachine: Using SSH client type: native
	I1216 11:54:49.483853  276553 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I1216 11:54:49.483866  276553 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 11:54:49.597634  276553 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734350089.573652824
	
	I1216 11:54:49.597681  276553 fix.go:216] guest clock: 1734350089.573652824
	I1216 11:54:49.597692  276553 fix.go:229] Guest: 2024-12-16 11:54:49.573652824 +0000 UTC Remote: 2024-12-16 11:54:49.478982328 +0000 UTC m=+21.547619076 (delta=94.670496ms)
	I1216 11:54:49.597721  276553 fix.go:200] guest clock delta is within tolerance: 94.670496ms
	I1216 11:54:49.597731  276553 start.go:83] releasing machines lock for "old-k8s-version-933974", held for 21.503680456s
	I1216 11:54:49.597762  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:54:49.598033  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetIP
	I1216 11:54:49.601099  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.601539  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:49.601567  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.601816  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:54:49.602407  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:54:49.602630  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .DriverName
	I1216 11:54:49.602752  276553 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:54:49.602818  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:54:49.602875  276553 ssh_runner.go:195] Run: cat /version.json
	I1216 11:54:49.602905  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHHostname
	I1216 11:54:49.605609  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.605927  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:49.605956  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.605974  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.606110  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:54:49.606308  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:49.606437  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:49.606457  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:49.606489  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:54:49.606633  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHPort
	I1216 11:54:49.606656  276553 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa Username:docker}
	I1216 11:54:49.606776  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHKeyPath
	I1216 11:54:49.606927  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetSSHUsername
	I1216 11:54:49.607079  276553 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/old-k8s-version-933974/id_rsa Username:docker}
	I1216 11:54:49.711865  276553 ssh_runner.go:195] Run: systemctl --version
	I1216 11:54:49.717939  276553 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:54:49.865568  276553 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 11:54:49.871020  276553 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 11:54:49.871084  276553 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:54:49.887755  276553 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 11:54:49.887790  276553 start.go:495] detecting cgroup driver to use...
	I1216 11:54:49.887850  276553 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:54:49.904992  276553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:54:49.919437  276553 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:54:49.919513  276553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:54:49.934281  276553 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:54:49.950390  276553 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:54:50.081429  276553 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:54:50.240539  276553 docker.go:233] disabling docker service ...
	I1216 11:54:50.240606  276553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:54:50.257076  276553 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:54:50.270505  276553 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:54:50.392383  276553 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:54:50.515572  276553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:54:50.530947  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:54:50.551812  276553 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1216 11:54:50.551883  276553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:54:50.563222  276553 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:54:50.563305  276553 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:54:50.574272  276553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:54:50.585745  276553 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:54:50.599502  276553 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:54:50.611139  276553 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:54:50.620643  276553 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 11:54:50.620702  276553 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 11:54:50.635598  276553 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 11:54:50.645610  276553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:54:50.762873  276553 ssh_runner.go:195] Run: sudo systemctl restart crio
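For reference, the container-runtime preparation logged above boils down to roughly the following shell steps. This is a minimal sketch reconstructed only from the commands in this log; the drop-in path /etc/crio/crio.conf.d/02-crio.conf and the pause:3.2 tag are specific to this v1.20.0 run.
	# point crictl at the cri-o socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch cri-o to the cgroupfs cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# load br_netfilter (the sysctl probe above failed because the module was not loaded) and enable forwarding
	sudo modprobe br_netfilter
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	# apply and restart the runtime
	sudo systemctl daemon-reload
	sudo systemctl restart crio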
	I1216 11:54:50.854662  276553 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:54:50.854750  276553 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:54:50.859440  276553 start.go:563] Will wait 60s for crictl version
	I1216 11:54:50.859523  276553 ssh_runner.go:195] Run: which crictl
	I1216 11:54:50.863045  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:54:50.905383  276553 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 11:54:50.905473  276553 ssh_runner.go:195] Run: crio --version
	I1216 11:54:50.934095  276553 ssh_runner.go:195] Run: crio --version
	I1216 11:54:50.967457  276553 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1216 11:54:50.968834  276553 main.go:141] libmachine: (old-k8s-version-933974) Calling .GetIP
	I1216 11:54:50.971931  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:50.972367  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:69:8b", ip: ""} in network mk-old-k8s-version-933974: {Iface:virbr2 ExpiryTime:2024-12-16 12:54:39 +0000 UTC Type:0 Mac:52:54:00:70:69:8b Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-933974 Clientid:01:52:54:00:70:69:8b}
	I1216 11:54:50.972402  276553 main.go:141] libmachine: (old-k8s-version-933974) DBG | domain old-k8s-version-933974 has defined IP address 192.168.61.2 and MAC address 52:54:00:70:69:8b in network mk-old-k8s-version-933974
	I1216 11:54:50.972799  276553 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1216 11:54:50.976917  276553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:54:50.992182  276553 kubeadm.go:883] updating cluster {Name:old-k8s-version-933974 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:54:50.992336  276553 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 11:54:50.992402  276553 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:54:51.042290  276553 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 11:54:51.042361  276553 ssh_runner.go:195] Run: which lz4
	I1216 11:54:51.046638  276553 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 11:54:51.050568  276553 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 11:54:51.050612  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1216 11:54:52.671660  276553 crio.go:462] duration metric: took 1.625052057s to copy over tarball
	I1216 11:54:52.671740  276553 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 11:54:55.633058  276553 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.961277133s)
	I1216 11:54:55.633099  276553 crio.go:469] duration metric: took 2.961410315s to extract the tarball
	I1216 11:54:55.633108  276553 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 11:54:55.676056  276553 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:54:55.708105  276553 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1216 11:54:55.708134  276553 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1216 11:54:55.708193  276553 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:54:55.708235  276553 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:54:55.708270  276553 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1216 11:54:55.708243  276553 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1216 11:54:55.708272  276553 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1216 11:54:55.708293  276553 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:54:55.708296  276553 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:54:55.708305  276553 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:54:55.710213  276553 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:54:55.710225  276553 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1216 11:54:55.710273  276553 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1216 11:54:55.710323  276553 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:54:55.710353  276553 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:54:55.710421  276553 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1216 11:54:55.710213  276553 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:54:55.710217  276553 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:54:55.861668  276553 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1216 11:54:55.863180  276553 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1216 11:54:55.865518  276553 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:54:55.869090  276553 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:54:55.871705  276553 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:54:55.880551  276553 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:54:55.883101  276553 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1216 11:54:55.955708  276553 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1216 11:54:55.955773  276553 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1216 11:54:55.955824  276553 ssh_runner.go:195] Run: which crictl
	I1216 11:54:56.043187  276553 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1216 11:54:56.043218  276553 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1216 11:54:56.043241  276553 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1216 11:54:56.043255  276553 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:54:56.043292  276553 ssh_runner.go:195] Run: which crictl
	I1216 11:54:56.043298  276553 ssh_runner.go:195] Run: which crictl
	I1216 11:54:56.055439  276553 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1216 11:54:56.055491  276553 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:54:56.055554  276553 ssh_runner.go:195] Run: which crictl
	I1216 11:54:56.061759  276553 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1216 11:54:56.061814  276553 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:54:56.061854  276553 ssh_runner.go:195] Run: which crictl
	I1216 11:54:56.061768  276553 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1216 11:54:56.061940  276553 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:54:56.061994  276553 ssh_runner.go:195] Run: which crictl
	I1216 11:54:56.071337  276553 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1216 11:54:56.071390  276553 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1216 11:54:56.071431  276553 ssh_runner.go:195] Run: which crictl
	I1216 11:54:56.071431  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 11:54:56.071433  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:54:56.071463  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 11:54:56.071488  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:54:56.071499  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:54:56.071545  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:54:56.175201  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:54:56.175224  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 11:54:56.202400  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:54:56.202452  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:54:56.203803  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 11:54:56.203920  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 11:54:56.203937  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:54:56.324327  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1216 11:54:56.324330  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 11:54:56.354994  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1216 11:54:56.355045  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1216 11:54:56.355086  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1216 11:54:56.355129  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1216 11:54:56.359027  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1216 11:54:56.447402  276553 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1216 11:54:56.447408  276553 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1216 11:54:56.501968  276553 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1216 11:54:56.501994  276553 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1216 11:54:56.502040  276553 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1216 11:54:56.502195  276553 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1216 11:54:56.521086  276553 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1216 11:54:56.533869  276553 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1216 11:54:56.715077  276553 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:54:56.857682  276553 cache_images.go:92] duration metric: took 1.14952732s to LoadCachedImages
	W1216 11:54:56.857790  276553 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20107-210204/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1216 11:54:56.857809  276553 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.20.0 crio true true} ...
	I1216 11:54:56.857918  276553 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-933974 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 11:54:56.857990  276553 ssh_runner.go:195] Run: crio config
	I1216 11:54:56.908425  276553 cni.go:84] Creating CNI manager for ""
	I1216 11:54:56.908459  276553 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:54:56.908473  276553 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 11:54:56.908499  276553 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-933974 NodeName:old-k8s-version-933974 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1216 11:54:56.908638  276553 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-933974"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 11:54:56.908729  276553 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1216 11:54:56.918632  276553 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:54:56.918719  276553 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:54:56.927947  276553 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I1216 11:54:56.945609  276553 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:54:56.962638  276553 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I1216 11:54:56.979809  276553 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I1216 11:54:56.983838  276553 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:54:56.998023  276553 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:54:57.110624  276553 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:54:57.127399  276553 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974 for IP: 192.168.61.2
	I1216 11:54:57.127431  276553 certs.go:194] generating shared ca certs ...
	I1216 11:54:57.127453  276553 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:54:57.127646  276553 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 11:54:57.127690  276553 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 11:54:57.127700  276553 certs.go:256] generating profile certs ...
	I1216 11:54:57.127788  276553 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/client.key
	I1216 11:54:57.127835  276553 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.key.52ddef80
	I1216 11:54:57.127874  276553 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.key
	I1216 11:54:57.127988  276553 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem (1338 bytes)
	W1216 11:54:57.128029  276553 certs.go:480] ignoring /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519_empty.pem, impossibly tiny 0 bytes
	I1216 11:54:57.128039  276553 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 11:54:57.128069  276553 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:54:57.128100  276553 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:54:57.128137  276553 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 11:54:57.128193  276553 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:54:57.128836  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:54:57.173363  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 11:54:57.219437  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:54:57.260649  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 11:54:57.297447  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1216 11:54:57.335140  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 11:54:57.363434  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:54:57.389286  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/old-k8s-version-933974/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 11:54:57.416226  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem --> /usr/share/ca-certificates/217519.pem (1338 bytes)
	I1216 11:54:57.440335  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /usr/share/ca-certificates/2175192.pem (1708 bytes)
	I1216 11:54:57.465204  276553 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:54:57.489760  276553 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:54:57.511387  276553 ssh_runner.go:195] Run: openssl version
	I1216 11:54:57.517464  276553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2175192.pem && ln -fs /usr/share/ca-certificates/2175192.pem /etc/ssl/certs/2175192.pem"
	I1216 11:54:57.529106  276553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2175192.pem
	I1216 11:54:57.535315  276553 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 10:42 /usr/share/ca-certificates/2175192.pem
	I1216 11:54:57.535399  276553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2175192.pem
	I1216 11:54:57.541548  276553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2175192.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 11:54:57.553132  276553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:54:57.564282  276553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:54:57.569100  276553 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:54:57.569178  276553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:54:57.575036  276553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 11:54:57.586339  276553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/217519.pem && ln -fs /usr/share/ca-certificates/217519.pem /etc/ssl/certs/217519.pem"
	I1216 11:54:57.597865  276553 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/217519.pem
	I1216 11:54:57.602845  276553 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 10:42 /usr/share/ca-certificates/217519.pem
	I1216 11:54:57.602937  276553 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/217519.pem
	I1216 11:54:57.609337  276553 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/217519.pem /etc/ssl/certs/51391683.0"
	I1216 11:54:57.621531  276553 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:54:57.626871  276553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 11:54:57.634065  276553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 11:54:57.640399  276553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 11:54:57.647131  276553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 11:54:57.653674  276553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 11:54:57.660980  276553 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
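The six openssl runs above use -checkend 86400, which makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), presumably so the restart path can decide whether a cert needs regenerating. Any of them can be reproduced on the node, for example:
	# exit status 0 = still valid for at least 24h; 1 = about to expire
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400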
	I1216 11:54:57.668882  276553 kubeadm.go:392] StartCluster: {Name:old-k8s-version-933974 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-933974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:54:57.669014  276553 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:54:57.669124  276553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:54:57.706034  276553 cri.go:89] found id: ""
	I1216 11:54:57.706121  276553 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 11:54:57.716337  276553 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 11:54:57.716366  276553 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 11:54:57.716423  276553 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 11:54:57.726699  276553 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 11:54:57.727739  276553 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-933974" does not appear in /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:54:57.728370  276553 kubeconfig.go:62] /home/jenkins/minikube-integration/20107-210204/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-933974" cluster setting kubeconfig missing "old-k8s-version-933974" context setting]
	I1216 11:54:57.729302  276553 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:54:57.759102  276553 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 11:54:57.769367  276553 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I1216 11:54:57.769424  276553 kubeadm.go:1160] stopping kube-system containers ...
	I1216 11:54:57.769442  276553 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 11:54:57.769506  276553 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:54:57.807286  276553 cri.go:89] found id: ""
	I1216 11:54:57.807357  276553 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 11:54:57.825464  276553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:54:57.836010  276553 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:54:57.836038  276553 kubeadm.go:157] found existing configuration files:
	
	I1216 11:54:57.836101  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:54:57.845856  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:54:57.845927  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:54:57.855796  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:54:57.865130  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:54:57.865207  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:54:57.874953  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:54:57.884477  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:54:57.884540  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:54:57.895165  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:54:57.905616  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:54:57.905692  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 11:54:57.915596  276553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:54:57.926141  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:54:58.050016  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:54:58.965697  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:54:59.183365  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:54:59.296863  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
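Because existing configuration files were found, the restart path above re-runs individual kubeadm init phases rather than a full kubeadm init. A sketch of the same sequence as plain shell, using the binary path and config file shown in the log:
	KPATH=/var/lib/minikube/binaries/v1.20.0
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # word-splitting of $phase is intentional: each entry is "<phase> [subphase]"
	  sudo env PATH="$KPATH:$PATH" kubeadm init phase $phase --config "$CFG"
	done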
	I1216 11:54:59.390267  276553 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:54:59.390366  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:54:59.891459  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:00.391015  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:00.890560  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:01.391107  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:01.890729  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:02.391346  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:02.891250  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:03.391289  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:03.891211  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:04.391363  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:04.890455  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:05.391361  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:05.891262  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:06.391216  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:06.891322  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:07.390504  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:07.891417  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:08.391261  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:08.891185  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:09.390752  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:09.891354  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:10.390919  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:10.890710  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:11.390454  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:11.891275  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:12.390873  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:12.890907  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:13.390409  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:13.891161  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:14.390759  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:14.891254  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:15.390482  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:15.891348  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:16.391398  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:16.890978  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:17.390549  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:17.891001  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:18.390697  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:18.891334  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:19.390447  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:19.891333  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:20.390565  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:20.891178  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:21.390669  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:21.890766  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:22.390527  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:22.890975  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:23.390959  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:23.890712  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:24.390920  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:24.891306  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:25.391102  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:25.890486  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:26.390595  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:26.890515  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:27.391261  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:27.890952  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:28.391384  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:28.890459  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:29.390512  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:29.891135  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:30.390543  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:30.890612  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:31.390720  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:31.890468  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:32.391124  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:32.890889  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:33.391273  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:33.890763  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:34.390638  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:34.891002  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:35.391126  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:35.890753  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:36.391178  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:36.890786  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:37.390458  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:37.890882  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:38.391040  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:38.891141  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:39.391459  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:39.890520  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:40.390456  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:40.890498  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:41.390442  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:41.890644  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:42.390478  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:42.890764  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:43.390579  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:43.890683  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:44.390424  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:44.890623  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:45.390872  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:45.890456  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:46.390549  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:46.890588  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:47.391503  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:47.890927  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:48.391346  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:48.890982  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:49.390661  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:49.890717  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:50.391403  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:50.891230  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:51.391139  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:51.890744  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:52.391210  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:52.891073  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:53.391101  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:53.891419  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:54.390756  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:54.891406  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:55.391501  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:55.890835  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:56.391079  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:56.890576  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:57.391171  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:57.891059  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:58.390702  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:58.891129  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:55:59.390733  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:55:59.390832  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:55:59.425264  276553 cri.go:89] found id: ""
	I1216 11:55:59.425313  276553 logs.go:282] 0 containers: []
	W1216 11:55:59.425326  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:55:59.425335  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:55:59.425398  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:55:59.459190  276553 cri.go:89] found id: ""
	I1216 11:55:59.459222  276553 logs.go:282] 0 containers: []
	W1216 11:55:59.459234  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:55:59.459241  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:55:59.459311  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:55:59.490932  276553 cri.go:89] found id: ""
	I1216 11:55:59.490963  276553 logs.go:282] 0 containers: []
	W1216 11:55:59.490973  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:55:59.490979  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:55:59.491041  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:55:59.527026  276553 cri.go:89] found id: ""
	I1216 11:55:59.527057  276553 logs.go:282] 0 containers: []
	W1216 11:55:59.527065  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:55:59.527071  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:55:59.527120  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:55:59.562465  276553 cri.go:89] found id: ""
	I1216 11:55:59.562494  276553 logs.go:282] 0 containers: []
	W1216 11:55:59.562503  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:55:59.562509  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:55:59.562578  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:55:59.594806  276553 cri.go:89] found id: ""
	I1216 11:55:59.594842  276553 logs.go:282] 0 containers: []
	W1216 11:55:59.594855  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:55:59.594863  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:55:59.594936  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:55:59.632718  276553 cri.go:89] found id: ""
	I1216 11:55:59.632752  276553 logs.go:282] 0 containers: []
	W1216 11:55:59.632763  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:55:59.632769  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:55:59.632833  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:55:59.664079  276553 cri.go:89] found id: ""
	I1216 11:55:59.664108  276553 logs.go:282] 0 containers: []
	W1216 11:55:59.664117  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:55:59.664126  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:55:59.664139  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:55:59.714265  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:55:59.714305  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:55:59.727955  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:55:59.727993  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:55:59.842091  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:55:59.842120  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:55:59.842134  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:55:59.914225  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:55:59.914269  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:02.457462  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:02.469895  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:02.469965  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:02.503650  276553 cri.go:89] found id: ""
	I1216 11:56:02.503680  276553 logs.go:282] 0 containers: []
	W1216 11:56:02.503689  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:02.503696  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:02.503748  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:02.536420  276553 cri.go:89] found id: ""
	I1216 11:56:02.536453  276553 logs.go:282] 0 containers: []
	W1216 11:56:02.536463  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:02.536471  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:02.536523  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:02.571858  276553 cri.go:89] found id: ""
	I1216 11:56:02.571892  276553 logs.go:282] 0 containers: []
	W1216 11:56:02.571900  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:02.571906  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:02.571956  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:02.607228  276553 cri.go:89] found id: ""
	I1216 11:56:02.607263  276553 logs.go:282] 0 containers: []
	W1216 11:56:02.607275  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:02.607283  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:02.607345  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:02.648897  276553 cri.go:89] found id: ""
	I1216 11:56:02.648924  276553 logs.go:282] 0 containers: []
	W1216 11:56:02.648932  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:02.648939  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:02.649007  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:02.692297  276553 cri.go:89] found id: ""
	I1216 11:56:02.692335  276553 logs.go:282] 0 containers: []
	W1216 11:56:02.692356  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:02.692365  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:02.692431  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:02.745786  276553 cri.go:89] found id: ""
	I1216 11:56:02.745828  276553 logs.go:282] 0 containers: []
	W1216 11:56:02.745841  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:02.745849  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:02.745914  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:02.778357  276553 cri.go:89] found id: ""
	I1216 11:56:02.778387  276553 logs.go:282] 0 containers: []
	W1216 11:56:02.778396  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:02.778406  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:02.778419  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:02.826180  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:02.826223  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:02.839105  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:02.839138  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:02.913948  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:02.913975  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:02.913993  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:02.990936  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:02.990979  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:05.530546  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:05.544429  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:05.544503  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:05.581077  276553 cri.go:89] found id: ""
	I1216 11:56:05.581106  276553 logs.go:282] 0 containers: []
	W1216 11:56:05.581114  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:05.581131  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:05.581183  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:05.615640  276553 cri.go:89] found id: ""
	I1216 11:56:05.615673  276553 logs.go:282] 0 containers: []
	W1216 11:56:05.615684  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:05.615691  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:05.615756  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:05.649215  276553 cri.go:89] found id: ""
	I1216 11:56:05.649252  276553 logs.go:282] 0 containers: []
	W1216 11:56:05.649263  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:05.649271  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:05.649337  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:05.681869  276553 cri.go:89] found id: ""
	I1216 11:56:05.681906  276553 logs.go:282] 0 containers: []
	W1216 11:56:05.681917  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:05.681924  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:05.681991  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:05.716968  276553 cri.go:89] found id: ""
	I1216 11:56:05.717007  276553 logs.go:282] 0 containers: []
	W1216 11:56:05.717018  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:05.717025  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:05.717082  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:05.751098  276553 cri.go:89] found id: ""
	I1216 11:56:05.751128  276553 logs.go:282] 0 containers: []
	W1216 11:56:05.751141  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:05.751148  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:05.751210  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:05.788336  276553 cri.go:89] found id: ""
	I1216 11:56:05.788385  276553 logs.go:282] 0 containers: []
	W1216 11:56:05.788398  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:05.788406  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:05.788474  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:05.829930  276553 cri.go:89] found id: ""
	I1216 11:56:05.829966  276553 logs.go:282] 0 containers: []
	W1216 11:56:05.829978  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:05.829991  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:05.830006  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:05.880680  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:05.880722  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:05.895889  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:05.895919  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:05.965486  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:05.965527  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:05.965556  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:06.050591  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:06.050638  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:08.591646  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:08.604933  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:08.605032  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:08.637250  276553 cri.go:89] found id: ""
	I1216 11:56:08.637286  276553 logs.go:282] 0 containers: []
	W1216 11:56:08.637299  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:08.637307  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:08.637375  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:08.670387  276553 cri.go:89] found id: ""
	I1216 11:56:08.670412  276553 logs.go:282] 0 containers: []
	W1216 11:56:08.670421  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:08.670426  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:08.670477  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:08.704069  276553 cri.go:89] found id: ""
	I1216 11:56:08.704104  276553 logs.go:282] 0 containers: []
	W1216 11:56:08.704116  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:08.704123  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:08.704190  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:08.739786  276553 cri.go:89] found id: ""
	I1216 11:56:08.739815  276553 logs.go:282] 0 containers: []
	W1216 11:56:08.739824  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:08.739831  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:08.739890  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:08.773017  276553 cri.go:89] found id: ""
	I1216 11:56:08.773047  276553 logs.go:282] 0 containers: []
	W1216 11:56:08.773055  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:08.773061  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:08.773114  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:08.807141  276553 cri.go:89] found id: ""
	I1216 11:56:08.807172  276553 logs.go:282] 0 containers: []
	W1216 11:56:08.807181  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:08.807187  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:08.807239  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:08.839527  276553 cri.go:89] found id: ""
	I1216 11:56:08.839552  276553 logs.go:282] 0 containers: []
	W1216 11:56:08.839560  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:08.839565  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:08.839631  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:08.870481  276553 cri.go:89] found id: ""
	I1216 11:56:08.870507  276553 logs.go:282] 0 containers: []
	W1216 11:56:08.870516  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:08.870525  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:08.870538  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:08.906859  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:08.906900  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:08.958234  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:08.958276  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:08.971882  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:08.971913  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:09.042606  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:09.042636  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:09.042657  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:11.621001  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:11.633695  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:11.633773  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:11.666126  276553 cri.go:89] found id: ""
	I1216 11:56:11.666159  276553 logs.go:282] 0 containers: []
	W1216 11:56:11.666169  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:11.666177  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:11.666239  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:11.701989  276553 cri.go:89] found id: ""
	I1216 11:56:11.702020  276553 logs.go:282] 0 containers: []
	W1216 11:56:11.702028  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:11.702035  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:11.702087  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:11.734285  276553 cri.go:89] found id: ""
	I1216 11:56:11.734314  276553 logs.go:282] 0 containers: []
	W1216 11:56:11.734322  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:11.734328  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:11.734382  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:11.766514  276553 cri.go:89] found id: ""
	I1216 11:56:11.766573  276553 logs.go:282] 0 containers: []
	W1216 11:56:11.766586  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:11.766598  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:11.766670  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:11.800665  276553 cri.go:89] found id: ""
	I1216 11:56:11.800698  276553 logs.go:282] 0 containers: []
	W1216 11:56:11.800710  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:11.800718  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:11.800785  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:11.835742  276553 cri.go:89] found id: ""
	I1216 11:56:11.835775  276553 logs.go:282] 0 containers: []
	W1216 11:56:11.835786  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:11.835794  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:11.835860  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:11.869435  276553 cri.go:89] found id: ""
	I1216 11:56:11.869469  276553 logs.go:282] 0 containers: []
	W1216 11:56:11.869481  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:11.869489  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:11.869556  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:11.902644  276553 cri.go:89] found id: ""
	I1216 11:56:11.902692  276553 logs.go:282] 0 containers: []
	W1216 11:56:11.902703  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:11.902714  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:11.902727  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:11.980372  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:11.980417  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:12.019685  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:12.019720  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:12.074464  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:12.074503  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:12.088017  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:12.088048  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:12.163375  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:14.663765  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:14.677562  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:14.677626  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:14.715117  276553 cri.go:89] found id: ""
	I1216 11:56:14.715153  276553 logs.go:282] 0 containers: []
	W1216 11:56:14.715165  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:14.715173  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:14.715239  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:14.752644  276553 cri.go:89] found id: ""
	I1216 11:56:14.752674  276553 logs.go:282] 0 containers: []
	W1216 11:56:14.752683  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:14.752688  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:14.752741  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:14.792028  276553 cri.go:89] found id: ""
	I1216 11:56:14.792058  276553 logs.go:282] 0 containers: []
	W1216 11:56:14.792067  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:14.792073  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:14.792123  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:14.832531  276553 cri.go:89] found id: ""
	I1216 11:56:14.832558  276553 logs.go:282] 0 containers: []
	W1216 11:56:14.832566  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:14.832571  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:14.832634  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:14.869122  276553 cri.go:89] found id: ""
	I1216 11:56:14.869160  276553 logs.go:282] 0 containers: []
	W1216 11:56:14.869173  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:14.869180  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:14.869242  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:14.902211  276553 cri.go:89] found id: ""
	I1216 11:56:14.902241  276553 logs.go:282] 0 containers: []
	W1216 11:56:14.902252  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:14.902262  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:14.902334  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:14.936781  276553 cri.go:89] found id: ""
	I1216 11:56:14.936818  276553 logs.go:282] 0 containers: []
	W1216 11:56:14.936830  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:14.936838  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:14.936939  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:14.971920  276553 cri.go:89] found id: ""
	I1216 11:56:14.971957  276553 logs.go:282] 0 containers: []
	W1216 11:56:14.971968  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:14.971980  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:14.971993  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:15.009758  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:15.009791  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:15.064989  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:15.065032  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:15.078843  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:15.078878  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:15.152676  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:15.152699  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:15.152712  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:17.730165  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:17.743534  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:17.743606  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:17.776343  276553 cri.go:89] found id: ""
	I1216 11:56:17.776381  276553 logs.go:282] 0 containers: []
	W1216 11:56:17.776394  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:17.776402  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:17.776468  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:17.813882  276553 cri.go:89] found id: ""
	I1216 11:56:17.813922  276553 logs.go:282] 0 containers: []
	W1216 11:56:17.813935  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:17.813944  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:17.814009  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:17.848087  276553 cri.go:89] found id: ""
	I1216 11:56:17.848117  276553 logs.go:282] 0 containers: []
	W1216 11:56:17.848126  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:17.848132  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:17.848183  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:17.883393  276553 cri.go:89] found id: ""
	I1216 11:56:17.883424  276553 logs.go:282] 0 containers: []
	W1216 11:56:17.883432  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:17.883438  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:17.883489  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:17.915489  276553 cri.go:89] found id: ""
	I1216 11:56:17.915520  276553 logs.go:282] 0 containers: []
	W1216 11:56:17.915528  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:17.915534  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:17.915586  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:17.949039  276553 cri.go:89] found id: ""
	I1216 11:56:17.949070  276553 logs.go:282] 0 containers: []
	W1216 11:56:17.949079  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:17.949085  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:17.949137  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:17.983693  276553 cri.go:89] found id: ""
	I1216 11:56:17.983725  276553 logs.go:282] 0 containers: []
	W1216 11:56:17.983733  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:17.983741  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:17.983796  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:18.020654  276553 cri.go:89] found id: ""
	I1216 11:56:18.020687  276553 logs.go:282] 0 containers: []
	W1216 11:56:18.020699  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:18.020712  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:18.020729  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:18.076667  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:18.076719  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:18.090016  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:18.090045  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:18.161708  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:18.161742  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:18.161761  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:18.242824  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:18.242871  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:20.779242  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:20.792037  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:20.792105  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:20.824142  276553 cri.go:89] found id: ""
	I1216 11:56:20.824175  276553 logs.go:282] 0 containers: []
	W1216 11:56:20.824183  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:20.824190  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:20.824255  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:20.861031  276553 cri.go:89] found id: ""
	I1216 11:56:20.861066  276553 logs.go:282] 0 containers: []
	W1216 11:56:20.861075  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:20.861081  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:20.861141  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:20.898059  276553 cri.go:89] found id: ""
	I1216 11:56:20.898092  276553 logs.go:282] 0 containers: []
	W1216 11:56:20.898100  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:20.898106  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:20.898173  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:20.935632  276553 cri.go:89] found id: ""
	I1216 11:56:20.935669  276553 logs.go:282] 0 containers: []
	W1216 11:56:20.935681  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:20.935690  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:20.935760  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:20.974422  276553 cri.go:89] found id: ""
	I1216 11:56:20.974457  276553 logs.go:282] 0 containers: []
	W1216 11:56:20.974469  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:20.974477  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:20.974542  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:21.007898  276553 cri.go:89] found id: ""
	I1216 11:56:21.007936  276553 logs.go:282] 0 containers: []
	W1216 11:56:21.007945  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:21.007952  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:21.008006  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:21.045641  276553 cri.go:89] found id: ""
	I1216 11:56:21.045674  276553 logs.go:282] 0 containers: []
	W1216 11:56:21.045685  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:21.045693  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:21.045759  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:21.081139  276553 cri.go:89] found id: ""
	I1216 11:56:21.081172  276553 logs.go:282] 0 containers: []
	W1216 11:56:21.081181  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:21.081191  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:21.081203  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:21.157356  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:21.157403  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:21.196462  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:21.196497  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:21.247277  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:21.247318  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:21.261061  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:21.261093  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:21.332489  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:23.833497  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:23.846955  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:23.847021  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:23.881796  276553 cri.go:89] found id: ""
	I1216 11:56:23.881828  276553 logs.go:282] 0 containers: []
	W1216 11:56:23.881837  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:23.881843  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:23.881905  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:23.918486  276553 cri.go:89] found id: ""
	I1216 11:56:23.918534  276553 logs.go:282] 0 containers: []
	W1216 11:56:23.918545  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:23.918551  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:23.918609  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:23.954397  276553 cri.go:89] found id: ""
	I1216 11:56:23.954433  276553 logs.go:282] 0 containers: []
	W1216 11:56:23.954444  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:23.954451  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:23.954513  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:23.988411  276553 cri.go:89] found id: ""
	I1216 11:56:23.988443  276553 logs.go:282] 0 containers: []
	W1216 11:56:23.988451  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:23.988457  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:23.988513  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:24.024681  276553 cri.go:89] found id: ""
	I1216 11:56:24.024717  276553 logs.go:282] 0 containers: []
	W1216 11:56:24.024732  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:24.024740  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:24.024814  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:24.059824  276553 cri.go:89] found id: ""
	I1216 11:56:24.059865  276553 logs.go:282] 0 containers: []
	W1216 11:56:24.059878  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:24.059887  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:24.059960  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:24.096221  276553 cri.go:89] found id: ""
	I1216 11:56:24.096263  276553 logs.go:282] 0 containers: []
	W1216 11:56:24.096274  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:24.096281  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:24.096341  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:24.129564  276553 cri.go:89] found id: ""
	I1216 11:56:24.129601  276553 logs.go:282] 0 containers: []
	W1216 11:56:24.129613  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:24.129627  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:24.129645  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:24.168934  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:24.168990  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:24.222215  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:24.222256  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:24.237914  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:24.237952  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:24.317226  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:24.317255  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:24.317272  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:26.899564  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:26.912654  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:26.912729  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:26.949406  276553 cri.go:89] found id: ""
	I1216 11:56:26.949444  276553 logs.go:282] 0 containers: []
	W1216 11:56:26.949457  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:26.949465  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:26.949540  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:26.983926  276553 cri.go:89] found id: ""
	I1216 11:56:26.983962  276553 logs.go:282] 0 containers: []
	W1216 11:56:26.983975  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:26.983983  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:26.984048  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:27.023748  276553 cri.go:89] found id: ""
	I1216 11:56:27.023798  276553 logs.go:282] 0 containers: []
	W1216 11:56:27.023820  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:27.023829  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:27.023898  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:27.062642  276553 cri.go:89] found id: ""
	I1216 11:56:27.062677  276553 logs.go:282] 0 containers: []
	W1216 11:56:27.062688  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:27.062697  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:27.062775  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:27.102371  276553 cri.go:89] found id: ""
	I1216 11:56:27.102407  276553 logs.go:282] 0 containers: []
	W1216 11:56:27.102420  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:27.102429  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:27.102499  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:27.138714  276553 cri.go:89] found id: ""
	I1216 11:56:27.138751  276553 logs.go:282] 0 containers: []
	W1216 11:56:27.138760  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:27.138767  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:27.138832  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:27.173530  276553 cri.go:89] found id: ""
	I1216 11:56:27.173562  276553 logs.go:282] 0 containers: []
	W1216 11:56:27.173574  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:27.173581  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:27.173649  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:27.211989  276553 cri.go:89] found id: ""
	I1216 11:56:27.212025  276553 logs.go:282] 0 containers: []
	W1216 11:56:27.212037  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:27.212052  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:27.212075  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:27.264675  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:27.264719  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:27.278644  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:27.278678  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:27.354350  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:27.354377  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:27.354395  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:27.437076  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:27.437126  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:29.981320  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:29.995083  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:29.995162  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:30.029463  276553 cri.go:89] found id: ""
	I1216 11:56:30.029497  276553 logs.go:282] 0 containers: []
	W1216 11:56:30.029508  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:30.029516  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:30.029586  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:30.064710  276553 cri.go:89] found id: ""
	I1216 11:56:30.064743  276553 logs.go:282] 0 containers: []
	W1216 11:56:30.064755  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:30.064762  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:30.064828  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:30.098615  276553 cri.go:89] found id: ""
	I1216 11:56:30.098650  276553 logs.go:282] 0 containers: []
	W1216 11:56:30.098662  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:30.098670  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:30.098741  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:30.131713  276553 cri.go:89] found id: ""
	I1216 11:56:30.131749  276553 logs.go:282] 0 containers: []
	W1216 11:56:30.131759  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:30.131766  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:30.131834  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:30.170353  276553 cri.go:89] found id: ""
	I1216 11:56:30.170388  276553 logs.go:282] 0 containers: []
	W1216 11:56:30.170398  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:30.170406  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:30.170472  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:30.204139  276553 cri.go:89] found id: ""
	I1216 11:56:30.204175  276553 logs.go:282] 0 containers: []
	W1216 11:56:30.204186  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:30.204193  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:30.204268  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:30.240869  276553 cri.go:89] found id: ""
	I1216 11:56:30.240902  276553 logs.go:282] 0 containers: []
	W1216 11:56:30.240910  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:30.240916  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:30.240979  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:30.280249  276553 cri.go:89] found id: ""
	I1216 11:56:30.280289  276553 logs.go:282] 0 containers: []
	W1216 11:56:30.280308  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:30.280320  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:30.280336  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:30.355777  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:30.355800  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:30.355815  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:30.430350  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:30.430397  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:30.472287  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:30.472322  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:30.524120  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:30.524182  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:33.040553  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:33.053031  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:33.053110  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:33.085779  276553 cri.go:89] found id: ""
	I1216 11:56:33.085816  276553 logs.go:282] 0 containers: []
	W1216 11:56:33.085847  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:33.085856  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:33.085914  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:33.121045  276553 cri.go:89] found id: ""
	I1216 11:56:33.121079  276553 logs.go:282] 0 containers: []
	W1216 11:56:33.121087  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:33.121093  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:33.121167  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:33.157362  276553 cri.go:89] found id: ""
	I1216 11:56:33.157391  276553 logs.go:282] 0 containers: []
	W1216 11:56:33.157400  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:33.157406  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:33.157464  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:33.191412  276553 cri.go:89] found id: ""
	I1216 11:56:33.191447  276553 logs.go:282] 0 containers: []
	W1216 11:56:33.191460  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:33.191468  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:33.191534  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:33.223681  276553 cri.go:89] found id: ""
	I1216 11:56:33.223714  276553 logs.go:282] 0 containers: []
	W1216 11:56:33.223729  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:33.223737  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:33.223803  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:33.259179  276553 cri.go:89] found id: ""
	I1216 11:56:33.259210  276553 logs.go:282] 0 containers: []
	W1216 11:56:33.259219  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:33.259226  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:33.259293  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:33.292572  276553 cri.go:89] found id: ""
	I1216 11:56:33.292609  276553 logs.go:282] 0 containers: []
	W1216 11:56:33.292617  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:33.292623  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:33.292683  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:33.329346  276553 cri.go:89] found id: ""
	I1216 11:56:33.329381  276553 logs.go:282] 0 containers: []
	W1216 11:56:33.329393  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:33.329407  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:33.329425  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:33.410759  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:33.410811  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:33.450820  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:33.450857  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:33.507094  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:33.507150  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:33.521508  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:33.521577  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:33.594329  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:36.095482  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:36.109439  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:36.109524  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:36.146736  276553 cri.go:89] found id: ""
	I1216 11:56:36.146769  276553 logs.go:282] 0 containers: []
	W1216 11:56:36.146778  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:36.146785  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:36.146844  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:36.184597  276553 cri.go:89] found id: ""
	I1216 11:56:36.184637  276553 logs.go:282] 0 containers: []
	W1216 11:56:36.184649  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:36.184660  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:36.184724  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:36.220053  276553 cri.go:89] found id: ""
	I1216 11:56:36.220093  276553 logs.go:282] 0 containers: []
	W1216 11:56:36.220106  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:36.220114  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:36.220190  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:36.256115  276553 cri.go:89] found id: ""
	I1216 11:56:36.256153  276553 logs.go:282] 0 containers: []
	W1216 11:56:36.256165  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:36.256173  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:36.256246  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:36.294114  276553 cri.go:89] found id: ""
	I1216 11:56:36.294149  276553 logs.go:282] 0 containers: []
	W1216 11:56:36.294158  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:36.294165  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:36.294232  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:36.330063  276553 cri.go:89] found id: ""
	I1216 11:56:36.330095  276553 logs.go:282] 0 containers: []
	W1216 11:56:36.330103  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:36.330110  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:36.330161  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:36.365397  276553 cri.go:89] found id: ""
	I1216 11:56:36.365430  276553 logs.go:282] 0 containers: []
	W1216 11:56:36.365438  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:36.365444  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:36.365498  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:36.399522  276553 cri.go:89] found id: ""
	I1216 11:56:36.399564  276553 logs.go:282] 0 containers: []
	W1216 11:56:36.399582  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:36.399595  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:36.399610  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:36.450513  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:36.450558  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:36.467757  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:36.467787  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:36.545005  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:36.545036  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:36.545048  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:36.623149  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:36.623189  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:39.165002  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:39.178965  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:39.179027  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:39.212080  276553 cri.go:89] found id: ""
	I1216 11:56:39.212126  276553 logs.go:282] 0 containers: []
	W1216 11:56:39.212137  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:39.212144  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:39.212199  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:39.249233  276553 cri.go:89] found id: ""
	I1216 11:56:39.249276  276553 logs.go:282] 0 containers: []
	W1216 11:56:39.249290  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:39.249298  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:39.249380  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:39.280720  276553 cri.go:89] found id: ""
	I1216 11:56:39.280754  276553 logs.go:282] 0 containers: []
	W1216 11:56:39.280766  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:39.280775  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:39.280826  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:39.313020  276553 cri.go:89] found id: ""
	I1216 11:56:39.313059  276553 logs.go:282] 0 containers: []
	W1216 11:56:39.313072  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:39.313080  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:39.313141  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:39.347528  276553 cri.go:89] found id: ""
	I1216 11:56:39.347556  276553 logs.go:282] 0 containers: []
	W1216 11:56:39.347564  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:39.347570  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:39.347621  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:39.380903  276553 cri.go:89] found id: ""
	I1216 11:56:39.380938  276553 logs.go:282] 0 containers: []
	W1216 11:56:39.380962  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:39.380972  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:39.381032  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:39.413991  276553 cri.go:89] found id: ""
	I1216 11:56:39.414025  276553 logs.go:282] 0 containers: []
	W1216 11:56:39.414035  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:39.414041  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:39.414095  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:39.447052  276553 cri.go:89] found id: ""
	I1216 11:56:39.447092  276553 logs.go:282] 0 containers: []
	W1216 11:56:39.447103  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:39.447114  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:39.447127  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:39.460529  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:39.460559  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:39.534353  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:39.534376  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:39.534389  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:39.621479  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:39.621519  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:39.662821  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:39.662849  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:42.217775  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:42.239745  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:42.239804  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:42.273906  276553 cri.go:89] found id: ""
	I1216 11:56:42.273939  276553 logs.go:282] 0 containers: []
	W1216 11:56:42.273952  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:42.273960  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:42.274023  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:42.307846  276553 cri.go:89] found id: ""
	I1216 11:56:42.307873  276553 logs.go:282] 0 containers: []
	W1216 11:56:42.307882  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:42.307887  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:42.307949  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:42.342374  276553 cri.go:89] found id: ""
	I1216 11:56:42.342400  276553 logs.go:282] 0 containers: []
	W1216 11:56:42.342409  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:42.342414  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:42.342461  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:42.376514  276553 cri.go:89] found id: ""
	I1216 11:56:42.376544  276553 logs.go:282] 0 containers: []
	W1216 11:56:42.376554  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:42.376562  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:42.376631  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:42.410405  276553 cri.go:89] found id: ""
	I1216 11:56:42.410447  276553 logs.go:282] 0 containers: []
	W1216 11:56:42.410459  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:42.410468  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:42.410547  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:42.443796  276553 cri.go:89] found id: ""
	I1216 11:56:42.443825  276553 logs.go:282] 0 containers: []
	W1216 11:56:42.443834  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:42.443840  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:42.443903  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:42.476133  276553 cri.go:89] found id: ""
	I1216 11:56:42.476167  276553 logs.go:282] 0 containers: []
	W1216 11:56:42.476180  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:42.476187  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:42.476253  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:42.510796  276553 cri.go:89] found id: ""
	I1216 11:56:42.510826  276553 logs.go:282] 0 containers: []
	W1216 11:56:42.510835  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:42.510847  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:42.510863  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:42.561766  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:42.561813  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:42.575147  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:42.575180  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:42.648335  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:42.648358  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:42.648371  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:42.723067  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:42.723109  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:45.263820  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:45.281239  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:45.281316  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:45.325143  276553 cri.go:89] found id: ""
	I1216 11:56:45.325178  276553 logs.go:282] 0 containers: []
	W1216 11:56:45.325190  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:45.325197  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:45.325267  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:45.368514  276553 cri.go:89] found id: ""
	I1216 11:56:45.368542  276553 logs.go:282] 0 containers: []
	W1216 11:56:45.368554  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:45.368561  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:45.368635  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:45.405770  276553 cri.go:89] found id: ""
	I1216 11:56:45.405802  276553 logs.go:282] 0 containers: []
	W1216 11:56:45.405814  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:45.405822  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:45.405889  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:45.444299  276553 cri.go:89] found id: ""
	I1216 11:56:45.444336  276553 logs.go:282] 0 containers: []
	W1216 11:56:45.444356  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:45.444364  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:45.444435  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:45.481000  276553 cri.go:89] found id: ""
	I1216 11:56:45.481033  276553 logs.go:282] 0 containers: []
	W1216 11:56:45.481045  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:45.481054  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:45.481133  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:45.531913  276553 cri.go:89] found id: ""
	I1216 11:56:45.531949  276553 logs.go:282] 0 containers: []
	W1216 11:56:45.531962  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:45.531970  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:45.532035  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:45.566204  276553 cri.go:89] found id: ""
	I1216 11:56:45.566235  276553 logs.go:282] 0 containers: []
	W1216 11:56:45.566246  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:45.566254  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:45.566314  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:45.599978  276553 cri.go:89] found id: ""
	I1216 11:56:45.600008  276553 logs.go:282] 0 containers: []
	W1216 11:56:45.600017  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:45.600026  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:45.600038  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:45.660446  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:45.660487  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:45.674086  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:45.674116  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:45.741311  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:45.741335  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:45.741354  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:45.816042  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:45.816087  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:48.358918  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:48.371702  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:48.371793  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:48.407798  276553 cri.go:89] found id: ""
	I1216 11:56:48.407830  276553 logs.go:282] 0 containers: []
	W1216 11:56:48.407843  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:48.407852  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:48.407904  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:48.441384  276553 cri.go:89] found id: ""
	I1216 11:56:48.441416  276553 logs.go:282] 0 containers: []
	W1216 11:56:48.441427  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:48.441435  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:48.441492  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:48.473516  276553 cri.go:89] found id: ""
	I1216 11:56:48.473547  276553 logs.go:282] 0 containers: []
	W1216 11:56:48.473557  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:48.473563  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:48.473625  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:48.506814  276553 cri.go:89] found id: ""
	I1216 11:56:48.506847  276553 logs.go:282] 0 containers: []
	W1216 11:56:48.506859  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:48.506868  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:48.506919  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:48.539605  276553 cri.go:89] found id: ""
	I1216 11:56:48.539634  276553 logs.go:282] 0 containers: []
	W1216 11:56:48.539643  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:48.539649  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:48.539697  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:48.570955  276553 cri.go:89] found id: ""
	I1216 11:56:48.570987  276553 logs.go:282] 0 containers: []
	W1216 11:56:48.571000  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:48.571006  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:48.571058  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:48.626635  276553 cri.go:89] found id: ""
	I1216 11:56:48.626671  276553 logs.go:282] 0 containers: []
	W1216 11:56:48.626681  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:48.626743  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:48.626812  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:48.675575  276553 cri.go:89] found id: ""
	I1216 11:56:48.675620  276553 logs.go:282] 0 containers: []
	W1216 11:56:48.675633  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:48.675647  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:48.675666  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:48.766323  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:48.766353  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:48.766371  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:48.843736  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:48.843785  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:48.883739  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:48.883780  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:48.938090  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:48.938124  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:51.455650  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:51.468435  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:51.468510  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:51.504100  276553 cri.go:89] found id: ""
	I1216 11:56:51.504132  276553 logs.go:282] 0 containers: []
	W1216 11:56:51.504140  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:51.504146  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:51.504198  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:51.536638  276553 cri.go:89] found id: ""
	I1216 11:56:51.536674  276553 logs.go:282] 0 containers: []
	W1216 11:56:51.536685  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:51.536691  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:51.536752  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:51.569255  276553 cri.go:89] found id: ""
	I1216 11:56:51.569305  276553 logs.go:282] 0 containers: []
	W1216 11:56:51.569319  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:51.569327  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:51.569428  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:51.602869  276553 cri.go:89] found id: ""
	I1216 11:56:51.602898  276553 logs.go:282] 0 containers: []
	W1216 11:56:51.602907  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:51.602913  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:51.602961  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:51.635270  276553 cri.go:89] found id: ""
	I1216 11:56:51.635318  276553 logs.go:282] 0 containers: []
	W1216 11:56:51.635330  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:51.635337  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:51.635391  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:51.674241  276553 cri.go:89] found id: ""
	I1216 11:56:51.674280  276553 logs.go:282] 0 containers: []
	W1216 11:56:51.674293  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:51.674301  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:51.674374  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:51.710737  276553 cri.go:89] found id: ""
	I1216 11:56:51.710765  276553 logs.go:282] 0 containers: []
	W1216 11:56:51.710774  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:51.710797  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:51.710854  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:51.744195  276553 cri.go:89] found id: ""
	I1216 11:56:51.744224  276553 logs.go:282] 0 containers: []
	W1216 11:56:51.744232  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:51.744241  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:51.744254  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:51.793803  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:51.793844  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:51.807508  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:51.807536  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:51.882642  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:51.882667  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:51.882684  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:51.958561  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:51.958602  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:54.501323  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:54.513666  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:54.513733  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:54.551262  276553 cri.go:89] found id: ""
	I1216 11:56:54.551295  276553 logs.go:282] 0 containers: []
	W1216 11:56:54.551303  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:54.551309  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:54.551369  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:54.587620  276553 cri.go:89] found id: ""
	I1216 11:56:54.587651  276553 logs.go:282] 0 containers: []
	W1216 11:56:54.587658  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:54.587665  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:54.587716  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:54.620305  276553 cri.go:89] found id: ""
	I1216 11:56:54.620336  276553 logs.go:282] 0 containers: []
	W1216 11:56:54.620344  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:54.620351  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:54.620414  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:54.652045  276553 cri.go:89] found id: ""
	I1216 11:56:54.652078  276553 logs.go:282] 0 containers: []
	W1216 11:56:54.652087  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:54.652093  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:54.652144  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:54.684524  276553 cri.go:89] found id: ""
	I1216 11:56:54.684561  276553 logs.go:282] 0 containers: []
	W1216 11:56:54.684572  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:54.684580  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:54.684656  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:54.716644  276553 cri.go:89] found id: ""
	I1216 11:56:54.716680  276553 logs.go:282] 0 containers: []
	W1216 11:56:54.716690  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:54.716696  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:54.716745  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:54.750515  276553 cri.go:89] found id: ""
	I1216 11:56:54.750544  276553 logs.go:282] 0 containers: []
	W1216 11:56:54.750557  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:54.750565  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:54.750623  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:54.784629  276553 cri.go:89] found id: ""
	I1216 11:56:54.784659  276553 logs.go:282] 0 containers: []
	W1216 11:56:54.784667  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:54.784677  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:54.784690  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:54.832093  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:54.832133  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:54.845354  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:54.845388  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:54.909734  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:54.909764  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:54.909780  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:56:54.985736  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:54.985778  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:57.527135  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:56:57.540161  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:56:57.540249  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:56:57.573091  276553 cri.go:89] found id: ""
	I1216 11:56:57.573123  276553 logs.go:282] 0 containers: []
	W1216 11:56:57.573134  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:56:57.573142  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:56:57.573217  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:56:57.606727  276553 cri.go:89] found id: ""
	I1216 11:56:57.606765  276553 logs.go:282] 0 containers: []
	W1216 11:56:57.606777  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:56:57.606786  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:56:57.606839  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:56:57.643629  276553 cri.go:89] found id: ""
	I1216 11:56:57.643670  276553 logs.go:282] 0 containers: []
	W1216 11:56:57.643683  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:56:57.643691  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:56:57.643758  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:56:57.681297  276553 cri.go:89] found id: ""
	I1216 11:56:57.681336  276553 logs.go:282] 0 containers: []
	W1216 11:56:57.681348  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:56:57.681356  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:56:57.681417  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:56:57.715434  276553 cri.go:89] found id: ""
	I1216 11:56:57.715468  276553 logs.go:282] 0 containers: []
	W1216 11:56:57.715480  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:56:57.715487  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:56:57.715559  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:56:57.753674  276553 cri.go:89] found id: ""
	I1216 11:56:57.753708  276553 logs.go:282] 0 containers: []
	W1216 11:56:57.753717  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:56:57.753724  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:56:57.753776  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:56:57.789681  276553 cri.go:89] found id: ""
	I1216 11:56:57.789712  276553 logs.go:282] 0 containers: []
	W1216 11:56:57.789723  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:56:57.789730  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:56:57.789793  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:56:57.824474  276553 cri.go:89] found id: ""
	I1216 11:56:57.824507  276553 logs.go:282] 0 containers: []
	W1216 11:56:57.824515  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:56:57.824525  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:56:57.824537  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:56:57.860725  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:56:57.860770  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:56:57.909823  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:56:57.909863  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:56:57.923103  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:56:57.923140  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:56:57.992126  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:56:57.992154  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:56:57.992172  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:00.573917  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:00.586385  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:00.586454  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:00.619041  276553 cri.go:89] found id: ""
	I1216 11:57:00.619076  276553 logs.go:282] 0 containers: []
	W1216 11:57:00.619089  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:00.619097  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:00.619154  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:00.651213  276553 cri.go:89] found id: ""
	I1216 11:57:00.651252  276553 logs.go:282] 0 containers: []
	W1216 11:57:00.651261  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:00.651269  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:00.651323  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:00.684041  276553 cri.go:89] found id: ""
	I1216 11:57:00.684075  276553 logs.go:282] 0 containers: []
	W1216 11:57:00.684087  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:00.684095  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:00.684169  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:00.718067  276553 cri.go:89] found id: ""
	I1216 11:57:00.718097  276553 logs.go:282] 0 containers: []
	W1216 11:57:00.718105  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:00.718115  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:00.718167  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:00.750453  276553 cri.go:89] found id: ""
	I1216 11:57:00.750485  276553 logs.go:282] 0 containers: []
	W1216 11:57:00.750497  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:00.750505  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:00.750558  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:00.782906  276553 cri.go:89] found id: ""
	I1216 11:57:00.782935  276553 logs.go:282] 0 containers: []
	W1216 11:57:00.782943  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:00.782955  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:00.783004  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:00.820821  276553 cri.go:89] found id: ""
	I1216 11:57:00.820857  276553 logs.go:282] 0 containers: []
	W1216 11:57:00.820868  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:00.820875  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:00.820924  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:00.856917  276553 cri.go:89] found id: ""
	I1216 11:57:00.856972  276553 logs.go:282] 0 containers: []
	W1216 11:57:00.856986  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:00.857000  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:00.857017  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:00.907836  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:00.907880  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:00.920548  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:00.920580  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:00.985639  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:00.985676  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:00.985691  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:01.062550  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:01.062595  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:03.599350  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:03.614986  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:03.615065  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:03.646774  276553 cri.go:89] found id: ""
	I1216 11:57:03.646806  276553 logs.go:282] 0 containers: []
	W1216 11:57:03.646815  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:03.646822  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:03.646886  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:03.687832  276553 cri.go:89] found id: ""
	I1216 11:57:03.687867  276553 logs.go:282] 0 containers: []
	W1216 11:57:03.687879  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:03.687887  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:03.687951  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:03.721928  276553 cri.go:89] found id: ""
	I1216 11:57:03.721958  276553 logs.go:282] 0 containers: []
	W1216 11:57:03.721966  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:03.721973  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:03.722033  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:03.756274  276553 cri.go:89] found id: ""
	I1216 11:57:03.756307  276553 logs.go:282] 0 containers: []
	W1216 11:57:03.756318  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:03.756326  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:03.756388  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:03.787411  276553 cri.go:89] found id: ""
	I1216 11:57:03.787439  276553 logs.go:282] 0 containers: []
	W1216 11:57:03.787451  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:03.787460  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:03.787518  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:03.824294  276553 cri.go:89] found id: ""
	I1216 11:57:03.824332  276553 logs.go:282] 0 containers: []
	W1216 11:57:03.824343  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:03.824351  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:03.824417  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:03.859211  276553 cri.go:89] found id: ""
	I1216 11:57:03.859249  276553 logs.go:282] 0 containers: []
	W1216 11:57:03.859258  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:03.859265  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:03.859339  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:03.892849  276553 cri.go:89] found id: ""
	I1216 11:57:03.892885  276553 logs.go:282] 0 containers: []
	W1216 11:57:03.892898  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:03.892912  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:03.892932  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:03.934012  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:03.934046  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:04.001048  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:04.001091  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:04.015791  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:04.015821  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:04.082024  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:04.082052  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:04.082068  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:06.660853  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:06.673746  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:06.673830  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:06.710787  276553 cri.go:89] found id: ""
	I1216 11:57:06.710819  276553 logs.go:282] 0 containers: []
	W1216 11:57:06.710831  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:06.710843  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:06.710907  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:06.746823  276553 cri.go:89] found id: ""
	I1216 11:57:06.746856  276553 logs.go:282] 0 containers: []
	W1216 11:57:06.746867  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:06.746874  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:06.746940  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:06.779105  276553 cri.go:89] found id: ""
	I1216 11:57:06.779137  276553 logs.go:282] 0 containers: []
	W1216 11:57:06.779149  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:06.779166  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:06.779227  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:06.812297  276553 cri.go:89] found id: ""
	I1216 11:57:06.812335  276553 logs.go:282] 0 containers: []
	W1216 11:57:06.812354  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:06.812362  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:06.812428  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:06.851504  276553 cri.go:89] found id: ""
	I1216 11:57:06.851539  276553 logs.go:282] 0 containers: []
	W1216 11:57:06.851548  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:06.851554  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:06.851620  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:06.887840  276553 cri.go:89] found id: ""
	I1216 11:57:06.887871  276553 logs.go:282] 0 containers: []
	W1216 11:57:06.887879  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:06.887885  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:06.887945  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:06.919796  276553 cri.go:89] found id: ""
	I1216 11:57:06.919827  276553 logs.go:282] 0 containers: []
	W1216 11:57:06.919840  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:06.919847  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:06.919908  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:06.957998  276553 cri.go:89] found id: ""
	I1216 11:57:06.958033  276553 logs.go:282] 0 containers: []
	W1216 11:57:06.958044  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:06.958056  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:06.958072  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:07.020906  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:07.020973  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:07.037157  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:07.037191  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:07.110808  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:07.110839  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:07.110855  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:07.212691  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:07.212730  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:09.757821  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:09.778737  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:09.778806  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:09.835122  276553 cri.go:89] found id: ""
	I1216 11:57:09.835162  276553 logs.go:282] 0 containers: []
	W1216 11:57:09.835174  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:09.835181  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:09.835247  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:09.882404  276553 cri.go:89] found id: ""
	I1216 11:57:09.882451  276553 logs.go:282] 0 containers: []
	W1216 11:57:09.882462  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:09.882470  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:09.882540  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:09.923780  276553 cri.go:89] found id: ""
	I1216 11:57:09.923807  276553 logs.go:282] 0 containers: []
	W1216 11:57:09.923815  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:09.923823  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:09.923874  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:09.971939  276553 cri.go:89] found id: ""
	I1216 11:57:09.971967  276553 logs.go:282] 0 containers: []
	W1216 11:57:09.971980  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:09.971987  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:09.972043  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:10.019977  276553 cri.go:89] found id: ""
	I1216 11:57:10.020009  276553 logs.go:282] 0 containers: []
	W1216 11:57:10.020020  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:10.020027  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:10.020089  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:10.063539  276553 cri.go:89] found id: ""
	I1216 11:57:10.063568  276553 logs.go:282] 0 containers: []
	W1216 11:57:10.063579  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:10.063587  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:10.063641  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:10.102487  276553 cri.go:89] found id: ""
	I1216 11:57:10.102520  276553 logs.go:282] 0 containers: []
	W1216 11:57:10.102532  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:10.102539  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:10.102601  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:10.141843  276553 cri.go:89] found id: ""
	I1216 11:57:10.141879  276553 logs.go:282] 0 containers: []
	W1216 11:57:10.141892  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:10.141905  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:10.141921  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:10.204689  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:10.204737  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:10.219943  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:10.219977  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:10.299531  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:10.299561  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:10.299579  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:10.384515  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:10.384557  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:12.923369  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:12.937784  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:12.937851  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:12.984812  276553 cri.go:89] found id: ""
	I1216 11:57:12.984845  276553 logs.go:282] 0 containers: []
	W1216 11:57:12.984857  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:12.984866  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:12.984928  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:13.028751  276553 cri.go:89] found id: ""
	I1216 11:57:13.028784  276553 logs.go:282] 0 containers: []
	W1216 11:57:13.028794  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:13.028800  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:13.028846  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:13.075310  276553 cri.go:89] found id: ""
	I1216 11:57:13.075357  276553 logs.go:282] 0 containers: []
	W1216 11:57:13.075375  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:13.075384  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:13.075442  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:13.120486  276553 cri.go:89] found id: ""
	I1216 11:57:13.120513  276553 logs.go:282] 0 containers: []
	W1216 11:57:13.120521  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:13.120527  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:13.120585  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:13.166189  276553 cri.go:89] found id: ""
	I1216 11:57:13.166215  276553 logs.go:282] 0 containers: []
	W1216 11:57:13.166226  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:13.166233  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:13.166289  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:13.209483  276553 cri.go:89] found id: ""
	I1216 11:57:13.209512  276553 logs.go:282] 0 containers: []
	W1216 11:57:13.209524  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:13.209532  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:13.209596  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:13.247328  276553 cri.go:89] found id: ""
	I1216 11:57:13.247359  276553 logs.go:282] 0 containers: []
	W1216 11:57:13.247369  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:13.247377  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:13.247439  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:13.285552  276553 cri.go:89] found id: ""
	I1216 11:57:13.285591  276553 logs.go:282] 0 containers: []
	W1216 11:57:13.285606  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:13.285619  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:13.285635  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:13.360273  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:13.360298  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:13.360315  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:13.459012  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:13.459050  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:13.506863  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:13.506895  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:13.574616  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:13.574653  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:16.090654  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:16.107566  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:16.107654  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:16.142531  276553 cri.go:89] found id: ""
	I1216 11:57:16.142571  276553 logs.go:282] 0 containers: []
	W1216 11:57:16.142583  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:16.142592  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:16.142655  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:16.185843  276553 cri.go:89] found id: ""
	I1216 11:57:16.185876  276553 logs.go:282] 0 containers: []
	W1216 11:57:16.185885  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:16.185891  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:16.185950  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:16.225461  276553 cri.go:89] found id: ""
	I1216 11:57:16.225493  276553 logs.go:282] 0 containers: []
	W1216 11:57:16.225504  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:16.225513  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:16.225566  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:16.265150  276553 cri.go:89] found id: ""
	I1216 11:57:16.265178  276553 logs.go:282] 0 containers: []
	W1216 11:57:16.265187  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:16.265192  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:16.265249  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:16.303057  276553 cri.go:89] found id: ""
	I1216 11:57:16.303091  276553 logs.go:282] 0 containers: []
	W1216 11:57:16.303104  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:16.303112  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:16.303178  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:16.340980  276553 cri.go:89] found id: ""
	I1216 11:57:16.341012  276553 logs.go:282] 0 containers: []
	W1216 11:57:16.341021  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:16.341027  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:16.341077  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:16.375176  276553 cri.go:89] found id: ""
	I1216 11:57:16.375206  276553 logs.go:282] 0 containers: []
	W1216 11:57:16.375215  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:16.375234  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:16.375341  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:16.411947  276553 cri.go:89] found id: ""
	I1216 11:57:16.411993  276553 logs.go:282] 0 containers: []
	W1216 11:57:16.412006  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:16.412019  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:16.412036  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:16.477881  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:16.477919  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:16.491889  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:16.491923  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:16.571586  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:16.571607  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:16.571619  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:16.651757  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:16.651801  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:19.192605  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:19.210591  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:19.210679  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:19.260044  276553 cri.go:89] found id: ""
	I1216 11:57:19.260080  276553 logs.go:282] 0 containers: []
	W1216 11:57:19.260099  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:19.260106  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:19.260169  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:19.308899  276553 cri.go:89] found id: ""
	I1216 11:57:19.308944  276553 logs.go:282] 0 containers: []
	W1216 11:57:19.308978  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:19.308986  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:19.309052  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:19.355925  276553 cri.go:89] found id: ""
	I1216 11:57:19.355958  276553 logs.go:282] 0 containers: []
	W1216 11:57:19.355970  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:19.355978  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:19.356040  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:19.395581  276553 cri.go:89] found id: ""
	I1216 11:57:19.395617  276553 logs.go:282] 0 containers: []
	W1216 11:57:19.395629  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:19.395637  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:19.395711  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:19.432130  276553 cri.go:89] found id: ""
	I1216 11:57:19.432163  276553 logs.go:282] 0 containers: []
	W1216 11:57:19.432174  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:19.432183  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:19.432267  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:19.478526  276553 cri.go:89] found id: ""
	I1216 11:57:19.478560  276553 logs.go:282] 0 containers: []
	W1216 11:57:19.478572  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:19.478582  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:19.478649  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:19.523040  276553 cri.go:89] found id: ""
	I1216 11:57:19.523081  276553 logs.go:282] 0 containers: []
	W1216 11:57:19.523095  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:19.523102  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:19.523176  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:19.567956  276553 cri.go:89] found id: ""
	I1216 11:57:19.567995  276553 logs.go:282] 0 containers: []
	W1216 11:57:19.568009  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:19.568031  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:19.568048  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:19.623677  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:19.623714  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:19.641936  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:19.641979  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:19.738011  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:19.738037  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:19.738051  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:19.825126  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:19.825169  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:22.372867  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:22.387097  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:22.387176  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:22.426585  276553 cri.go:89] found id: ""
	I1216 11:57:22.426646  276553 logs.go:282] 0 containers: []
	W1216 11:57:22.426661  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:22.426672  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:22.426738  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:22.464280  276553 cri.go:89] found id: ""
	I1216 11:57:22.464316  276553 logs.go:282] 0 containers: []
	W1216 11:57:22.464329  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:22.464338  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:22.464408  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:22.515235  276553 cri.go:89] found id: ""
	I1216 11:57:22.515274  276553 logs.go:282] 0 containers: []
	W1216 11:57:22.515286  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:22.515294  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:22.515376  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:22.563715  276553 cri.go:89] found id: ""
	I1216 11:57:22.563744  276553 logs.go:282] 0 containers: []
	W1216 11:57:22.563755  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:22.563763  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:22.563829  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:22.604156  276553 cri.go:89] found id: ""
	I1216 11:57:22.604192  276553 logs.go:282] 0 containers: []
	W1216 11:57:22.604204  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:22.604212  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:22.604287  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:22.648931  276553 cri.go:89] found id: ""
	I1216 11:57:22.648974  276553 logs.go:282] 0 containers: []
	W1216 11:57:22.648986  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:22.648995  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:22.649068  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:22.693154  276553 cri.go:89] found id: ""
	I1216 11:57:22.693197  276553 logs.go:282] 0 containers: []
	W1216 11:57:22.693211  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:22.693232  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:22.693311  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:22.740051  276553 cri.go:89] found id: ""
	I1216 11:57:22.740093  276553 logs.go:282] 0 containers: []
	W1216 11:57:22.740109  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:22.740123  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:22.740139  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:22.855650  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:22.855697  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:22.897412  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:22.897468  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:22.977209  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:22.977271  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:22.996436  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:22.996476  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:23.107580  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:25.608405  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:25.621882  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:25.621953  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:25.658599  276553 cri.go:89] found id: ""
	I1216 11:57:25.658630  276553 logs.go:282] 0 containers: []
	W1216 11:57:25.658641  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:25.658648  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:25.658709  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:25.700371  276553 cri.go:89] found id: ""
	I1216 11:57:25.700406  276553 logs.go:282] 0 containers: []
	W1216 11:57:25.700417  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:25.700424  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:25.700484  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:25.736452  276553 cri.go:89] found id: ""
	I1216 11:57:25.736486  276553 logs.go:282] 0 containers: []
	W1216 11:57:25.736497  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:25.736505  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:25.736566  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:25.768714  276553 cri.go:89] found id: ""
	I1216 11:57:25.768750  276553 logs.go:282] 0 containers: []
	W1216 11:57:25.768761  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:25.768769  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:25.768842  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:25.801163  276553 cri.go:89] found id: ""
	I1216 11:57:25.801193  276553 logs.go:282] 0 containers: []
	W1216 11:57:25.801201  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:25.801206  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:25.801259  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:25.832094  276553 cri.go:89] found id: ""
	I1216 11:57:25.832159  276553 logs.go:282] 0 containers: []
	W1216 11:57:25.832170  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:25.832176  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:25.832289  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:25.872457  276553 cri.go:89] found id: ""
	I1216 11:57:25.872488  276553 logs.go:282] 0 containers: []
	W1216 11:57:25.872499  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:25.872507  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:25.872562  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:25.916751  276553 cri.go:89] found id: ""
	I1216 11:57:25.916788  276553 logs.go:282] 0 containers: []
	W1216 11:57:25.916801  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:25.916814  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:25.916831  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:25.972117  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:25.972162  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:25.987340  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:25.987387  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:26.071861  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:26.071895  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:26.071912  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:26.152393  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:26.152435  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:28.693062  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:28.710594  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:28.710679  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:28.746677  276553 cri.go:89] found id: ""
	I1216 11:57:28.746719  276553 logs.go:282] 0 containers: []
	W1216 11:57:28.746732  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:28.746748  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:28.746808  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:28.783817  276553 cri.go:89] found id: ""
	I1216 11:57:28.783855  276553 logs.go:282] 0 containers: []
	W1216 11:57:28.783867  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:28.783875  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:28.783932  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:28.823726  276553 cri.go:89] found id: ""
	I1216 11:57:28.823776  276553 logs.go:282] 0 containers: []
	W1216 11:57:28.823788  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:28.823796  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:28.823860  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:28.862290  276553 cri.go:89] found id: ""
	I1216 11:57:28.862324  276553 logs.go:282] 0 containers: []
	W1216 11:57:28.862335  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:28.862346  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:28.862421  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:28.899353  276553 cri.go:89] found id: ""
	I1216 11:57:28.899441  276553 logs.go:282] 0 containers: []
	W1216 11:57:28.899465  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:28.899482  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:28.899556  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:28.953450  276553 cri.go:89] found id: ""
	I1216 11:57:28.953490  276553 logs.go:282] 0 containers: []
	W1216 11:57:28.953505  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:28.953514  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:28.953583  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:29.009986  276553 cri.go:89] found id: ""
	I1216 11:57:29.010022  276553 logs.go:282] 0 containers: []
	W1216 11:57:29.010034  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:29.010042  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:29.010104  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:29.076922  276553 cri.go:89] found id: ""
	I1216 11:57:29.076978  276553 logs.go:282] 0 containers: []
	W1216 11:57:29.076991  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:29.077004  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:29.077028  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:29.147262  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:29.147300  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:29.165691  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:29.165733  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:29.254596  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:29.254633  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:29.254652  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:29.342313  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:29.342354  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:31.889378  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:31.902362  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:31.902433  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:31.937019  276553 cri.go:89] found id: ""
	I1216 11:57:31.937060  276553 logs.go:282] 0 containers: []
	W1216 11:57:31.937073  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:31.937083  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:31.937148  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:31.970510  276553 cri.go:89] found id: ""
	I1216 11:57:31.970558  276553 logs.go:282] 0 containers: []
	W1216 11:57:31.970570  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:31.970578  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:31.970638  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:32.005641  276553 cri.go:89] found id: ""
	I1216 11:57:32.005681  276553 logs.go:282] 0 containers: []
	W1216 11:57:32.005692  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:32.005699  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:32.005757  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:32.043655  276553 cri.go:89] found id: ""
	I1216 11:57:32.043693  276553 logs.go:282] 0 containers: []
	W1216 11:57:32.043704  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:32.043713  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:32.043779  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:32.083743  276553 cri.go:89] found id: ""
	I1216 11:57:32.083769  276553 logs.go:282] 0 containers: []
	W1216 11:57:32.083777  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:32.083788  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:32.083838  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:32.120606  276553 cri.go:89] found id: ""
	I1216 11:57:32.120644  276553 logs.go:282] 0 containers: []
	W1216 11:57:32.120655  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:32.120662  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:32.120711  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:32.158641  276553 cri.go:89] found id: ""
	I1216 11:57:32.158671  276553 logs.go:282] 0 containers: []
	W1216 11:57:32.158679  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:32.158685  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:32.158734  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:32.196286  276553 cri.go:89] found id: ""
	I1216 11:57:32.196319  276553 logs.go:282] 0 containers: []
	W1216 11:57:32.196329  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:32.196341  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:32.196355  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:32.235498  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:32.235530  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:32.288742  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:32.288785  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:32.302713  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:32.302742  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:32.370490  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:32.370512  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:32.370525  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:34.952729  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:34.967337  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:34.967407  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:35.009703  276553 cri.go:89] found id: ""
	I1216 11:57:35.009751  276553 logs.go:282] 0 containers: []
	W1216 11:57:35.009763  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:35.009772  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:35.009826  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:35.046725  276553 cri.go:89] found id: ""
	I1216 11:57:35.046764  276553 logs.go:282] 0 containers: []
	W1216 11:57:35.046777  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:35.046785  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:35.046855  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:35.081092  276553 cri.go:89] found id: ""
	I1216 11:57:35.081140  276553 logs.go:282] 0 containers: []
	W1216 11:57:35.081152  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:35.081162  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:35.081222  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:35.114747  276553 cri.go:89] found id: ""
	I1216 11:57:35.114774  276553 logs.go:282] 0 containers: []
	W1216 11:57:35.114781  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:35.114787  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:35.114839  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:35.155176  276553 cri.go:89] found id: ""
	I1216 11:57:35.155200  276553 logs.go:282] 0 containers: []
	W1216 11:57:35.155208  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:35.155214  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:35.155268  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:35.190770  276553 cri.go:89] found id: ""
	I1216 11:57:35.190799  276553 logs.go:282] 0 containers: []
	W1216 11:57:35.190809  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:35.190816  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:35.190866  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:35.227625  276553 cri.go:89] found id: ""
	I1216 11:57:35.227653  276553 logs.go:282] 0 containers: []
	W1216 11:57:35.227663  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:35.227671  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:35.227726  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:35.264426  276553 cri.go:89] found id: ""
	I1216 11:57:35.264451  276553 logs.go:282] 0 containers: []
	W1216 11:57:35.264460  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:35.264469  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:35.264480  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:35.323330  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:35.323363  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:35.338499  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:35.338528  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:35.404670  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:35.404690  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:35.404703  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:35.488681  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:35.488723  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:38.035134  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:38.047506  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:38.047573  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:38.081181  276553 cri.go:89] found id: ""
	I1216 11:57:38.081217  276553 logs.go:282] 0 containers: []
	W1216 11:57:38.081230  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:38.081238  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:38.081303  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:38.120089  276553 cri.go:89] found id: ""
	I1216 11:57:38.120128  276553 logs.go:282] 0 containers: []
	W1216 11:57:38.120141  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:38.120158  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:38.120223  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:38.157965  276553 cri.go:89] found id: ""
	I1216 11:57:38.158005  276553 logs.go:282] 0 containers: []
	W1216 11:57:38.158019  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:38.158027  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:38.158092  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:38.193731  276553 cri.go:89] found id: ""
	I1216 11:57:38.193764  276553 logs.go:282] 0 containers: []
	W1216 11:57:38.193773  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:38.193779  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:38.193829  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:38.226838  276553 cri.go:89] found id: ""
	I1216 11:57:38.226875  276553 logs.go:282] 0 containers: []
	W1216 11:57:38.226885  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:38.226892  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:38.226954  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:38.261826  276553 cri.go:89] found id: ""
	I1216 11:57:38.261861  276553 logs.go:282] 0 containers: []
	W1216 11:57:38.261873  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:38.261881  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:38.261945  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:38.300837  276553 cri.go:89] found id: ""
	I1216 11:57:38.300868  276553 logs.go:282] 0 containers: []
	W1216 11:57:38.300877  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:38.300883  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:38.300946  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:38.332490  276553 cri.go:89] found id: ""
	I1216 11:57:38.332522  276553 logs.go:282] 0 containers: []
	W1216 11:57:38.332533  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:38.332546  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:38.332563  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:38.345110  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:38.345150  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:38.416541  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:38.416578  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:38.416596  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:38.494287  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:38.494336  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:38.531163  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:38.531207  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:41.084809  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:41.097899  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:41.097966  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:41.138614  276553 cri.go:89] found id: ""
	I1216 11:57:41.138657  276553 logs.go:282] 0 containers: []
	W1216 11:57:41.138672  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:41.138681  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:41.138752  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:41.182122  276553 cri.go:89] found id: ""
	I1216 11:57:41.182154  276553 logs.go:282] 0 containers: []
	W1216 11:57:41.182165  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:41.182173  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:41.182236  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:41.224949  276553 cri.go:89] found id: ""
	I1216 11:57:41.225008  276553 logs.go:282] 0 containers: []
	W1216 11:57:41.225020  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:41.225028  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:41.225101  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:41.263211  276553 cri.go:89] found id: ""
	I1216 11:57:41.263251  276553 logs.go:282] 0 containers: []
	W1216 11:57:41.263320  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:41.263333  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:41.263408  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:41.314283  276553 cri.go:89] found id: ""
	I1216 11:57:41.314321  276553 logs.go:282] 0 containers: []
	W1216 11:57:41.314335  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:41.314342  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:41.314414  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:41.360655  276553 cri.go:89] found id: ""
	I1216 11:57:41.360692  276553 logs.go:282] 0 containers: []
	W1216 11:57:41.360708  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:41.360717  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:41.360783  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:41.405359  276553 cri.go:89] found id: ""
	I1216 11:57:41.405388  276553 logs.go:282] 0 containers: []
	W1216 11:57:41.405396  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:41.405402  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:41.405462  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:41.440929  276553 cri.go:89] found id: ""
	I1216 11:57:41.440981  276553 logs.go:282] 0 containers: []
	W1216 11:57:41.440994  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:41.441008  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:41.441026  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:41.456164  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:41.456231  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:41.535989  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:41.536017  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:41.536034  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:41.612733  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:41.612778  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:41.667830  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:41.667868  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:44.229246  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:44.241211  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:44.241279  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:44.274022  276553 cri.go:89] found id: ""
	I1216 11:57:44.274058  276553 logs.go:282] 0 containers: []
	W1216 11:57:44.274069  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:44.274077  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:44.274148  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:44.304815  276553 cri.go:89] found id: ""
	I1216 11:57:44.304848  276553 logs.go:282] 0 containers: []
	W1216 11:57:44.304858  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:44.304864  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:44.304912  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:44.335850  276553 cri.go:89] found id: ""
	I1216 11:57:44.335888  276553 logs.go:282] 0 containers: []
	W1216 11:57:44.335898  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:44.335904  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:44.335954  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:44.367804  276553 cri.go:89] found id: ""
	I1216 11:57:44.367837  276553 logs.go:282] 0 containers: []
	W1216 11:57:44.367848  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:44.367855  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:44.367912  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:44.401786  276553 cri.go:89] found id: ""
	I1216 11:57:44.401819  276553 logs.go:282] 0 containers: []
	W1216 11:57:44.401827  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:44.401833  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:44.401938  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:44.434094  276553 cri.go:89] found id: ""
	I1216 11:57:44.434130  276553 logs.go:282] 0 containers: []
	W1216 11:57:44.434142  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:44.434157  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:44.434223  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:44.470944  276553 cri.go:89] found id: ""
	I1216 11:57:44.470971  276553 logs.go:282] 0 containers: []
	W1216 11:57:44.470980  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:44.470986  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:44.471044  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:44.508969  276553 cri.go:89] found id: ""
	I1216 11:57:44.508998  276553 logs.go:282] 0 containers: []
	W1216 11:57:44.509009  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:44.509024  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:44.509039  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:44.523660  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:44.523690  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:44.607140  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:44.607168  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:44.607185  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:44.691965  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:44.691998  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:44.739634  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:44.739665  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:47.295576  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:47.308196  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:47.308269  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:47.348580  276553 cri.go:89] found id: ""
	I1216 11:57:47.348615  276553 logs.go:282] 0 containers: []
	W1216 11:57:47.348627  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:47.348634  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:47.348699  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:47.383529  276553 cri.go:89] found id: ""
	I1216 11:57:47.383564  276553 logs.go:282] 0 containers: []
	W1216 11:57:47.383577  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:47.383585  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:47.383651  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:47.421235  276553 cri.go:89] found id: ""
	I1216 11:57:47.421276  276553 logs.go:282] 0 containers: []
	W1216 11:57:47.421288  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:47.421296  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:47.421363  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:47.458280  276553 cri.go:89] found id: ""
	I1216 11:57:47.458317  276553 logs.go:282] 0 containers: []
	W1216 11:57:47.458329  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:47.458337  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:47.458397  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:47.496084  276553 cri.go:89] found id: ""
	I1216 11:57:47.496118  276553 logs.go:282] 0 containers: []
	W1216 11:57:47.496129  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:47.496138  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:47.496211  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:47.534792  276553 cri.go:89] found id: ""
	I1216 11:57:47.534821  276553 logs.go:282] 0 containers: []
	W1216 11:57:47.534832  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:47.534840  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:47.534904  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:47.575164  276553 cri.go:89] found id: ""
	I1216 11:57:47.575213  276553 logs.go:282] 0 containers: []
	W1216 11:57:47.575226  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:47.575236  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:47.575304  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:47.622393  276553 cri.go:89] found id: ""
	I1216 11:57:47.622424  276553 logs.go:282] 0 containers: []
	W1216 11:57:47.622435  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:47.622452  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:47.622476  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:47.682346  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:47.682389  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:47.698800  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:47.698846  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:47.794028  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:47.794056  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:47.794071  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:47.884842  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:47.884891  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:50.435627  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:50.452486  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:50.452569  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:50.487422  276553 cri.go:89] found id: ""
	I1216 11:57:50.487464  276553 logs.go:282] 0 containers: []
	W1216 11:57:50.487477  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:50.487487  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:50.487549  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:50.521415  276553 cri.go:89] found id: ""
	I1216 11:57:50.521459  276553 logs.go:282] 0 containers: []
	W1216 11:57:50.521471  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:50.521479  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:50.521552  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:50.554675  276553 cri.go:89] found id: ""
	I1216 11:57:50.554715  276553 logs.go:282] 0 containers: []
	W1216 11:57:50.554727  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:50.554735  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:50.554801  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:50.587836  276553 cri.go:89] found id: ""
	I1216 11:57:50.587869  276553 logs.go:282] 0 containers: []
	W1216 11:57:50.587880  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:50.587888  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:50.587950  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:50.620609  276553 cri.go:89] found id: ""
	I1216 11:57:50.620640  276553 logs.go:282] 0 containers: []
	W1216 11:57:50.620654  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:50.620662  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:50.620721  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:50.654766  276553 cri.go:89] found id: ""
	I1216 11:57:50.654807  276553 logs.go:282] 0 containers: []
	W1216 11:57:50.654818  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:50.654824  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:50.654876  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:50.688125  276553 cri.go:89] found id: ""
	I1216 11:57:50.688165  276553 logs.go:282] 0 containers: []
	W1216 11:57:50.688190  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:50.688197  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:50.688272  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:50.720674  276553 cri.go:89] found id: ""
	I1216 11:57:50.720706  276553 logs.go:282] 0 containers: []
	W1216 11:57:50.720714  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:50.720724  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:50.720741  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:50.758621  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:50.758663  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:50.806617  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:50.806661  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:50.819916  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:50.819952  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:50.890830  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:50.890860  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:50.890874  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:53.468200  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:53.481089  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:53.481171  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:53.521274  276553 cri.go:89] found id: ""
	I1216 11:57:53.521311  276553 logs.go:282] 0 containers: []
	W1216 11:57:53.521323  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:53.521333  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:53.521401  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:53.554735  276553 cri.go:89] found id: ""
	I1216 11:57:53.554772  276553 logs.go:282] 0 containers: []
	W1216 11:57:53.554785  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:53.554791  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:53.554861  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:53.586732  276553 cri.go:89] found id: ""
	I1216 11:57:53.586765  276553 logs.go:282] 0 containers: []
	W1216 11:57:53.586776  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:53.586783  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:53.586849  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:53.619625  276553 cri.go:89] found id: ""
	I1216 11:57:53.619661  276553 logs.go:282] 0 containers: []
	W1216 11:57:53.619675  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:53.619683  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:53.619748  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:53.662506  276553 cri.go:89] found id: ""
	I1216 11:57:53.662541  276553 logs.go:282] 0 containers: []
	W1216 11:57:53.662553  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:53.662563  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:53.662631  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:53.695845  276553 cri.go:89] found id: ""
	I1216 11:57:53.695876  276553 logs.go:282] 0 containers: []
	W1216 11:57:53.695884  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:53.695891  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:53.695943  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:53.731456  276553 cri.go:89] found id: ""
	I1216 11:57:53.731491  276553 logs.go:282] 0 containers: []
	W1216 11:57:53.731501  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:53.731507  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:53.731562  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:53.765605  276553 cri.go:89] found id: ""
	I1216 11:57:53.765639  276553 logs.go:282] 0 containers: []
	W1216 11:57:53.765651  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:53.765669  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:53.765684  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:53.819980  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:53.820024  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:53.834332  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:53.834370  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:53.909209  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:53.909233  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:53.909246  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:53.988800  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:53.988841  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:56.525152  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:56.537823  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:56.537897  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:56.572372  276553 cri.go:89] found id: ""
	I1216 11:57:56.572404  276553 logs.go:282] 0 containers: []
	W1216 11:57:56.572413  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:56.572419  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:56.572477  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:56.604059  276553 cri.go:89] found id: ""
	I1216 11:57:56.604087  276553 logs.go:282] 0 containers: []
	W1216 11:57:56.604095  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:56.604103  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:56.604164  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:56.636124  276553 cri.go:89] found id: ""
	I1216 11:57:56.636159  276553 logs.go:282] 0 containers: []
	W1216 11:57:56.636173  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:56.636181  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:56.636247  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:56.667678  276553 cri.go:89] found id: ""
	I1216 11:57:56.667711  276553 logs.go:282] 0 containers: []
	W1216 11:57:56.667723  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:56.667733  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:56.667800  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:56.698445  276553 cri.go:89] found id: ""
	I1216 11:57:56.698473  276553 logs.go:282] 0 containers: []
	W1216 11:57:56.698481  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:56.698488  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:56.698551  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:56.729367  276553 cri.go:89] found id: ""
	I1216 11:57:56.729401  276553 logs.go:282] 0 containers: []
	W1216 11:57:56.729417  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:56.729423  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:56.729481  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:56.766860  276553 cri.go:89] found id: ""
	I1216 11:57:56.766893  276553 logs.go:282] 0 containers: []
	W1216 11:57:56.766903  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:56.766911  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:56.766974  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:56.799460  276553 cri.go:89] found id: ""
	I1216 11:57:56.799496  276553 logs.go:282] 0 containers: []
	W1216 11:57:56.799509  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:56.799520  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:57:56.799534  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:57:56.848585  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:57:56.848634  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:57:56.862344  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:56.862381  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:56.939430  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:56.939459  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:56.939475  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:57:57.037599  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:57:57.037638  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:57:59.578173  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:57:59.593113  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:57:59.593205  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:57:59.632635  276553 cri.go:89] found id: ""
	I1216 11:57:59.632673  276553 logs.go:282] 0 containers: []
	W1216 11:57:59.632682  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:57:59.632688  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:57:59.632748  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:57:59.667060  276553 cri.go:89] found id: ""
	I1216 11:57:59.667099  276553 logs.go:282] 0 containers: []
	W1216 11:57:59.667109  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:57:59.667115  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:57:59.667176  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:57:59.709958  276553 cri.go:89] found id: ""
	I1216 11:57:59.709990  276553 logs.go:282] 0 containers: []
	W1216 11:57:59.710002  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:57:59.710009  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:57:59.710086  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:57:59.750536  276553 cri.go:89] found id: ""
	I1216 11:57:59.750570  276553 logs.go:282] 0 containers: []
	W1216 11:57:59.750581  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:57:59.750588  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:57:59.750641  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:57:59.787439  276553 cri.go:89] found id: ""
	I1216 11:57:59.787477  276553 logs.go:282] 0 containers: []
	W1216 11:57:59.787489  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:57:59.787496  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:57:59.787567  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:57:59.822064  276553 cri.go:89] found id: ""
	I1216 11:57:59.822095  276553 logs.go:282] 0 containers: []
	W1216 11:57:59.822107  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:57:59.822116  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:57:59.822184  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:57:59.857059  276553 cri.go:89] found id: ""
	I1216 11:57:59.857098  276553 logs.go:282] 0 containers: []
	W1216 11:57:59.857111  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:57:59.857119  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:57:59.857192  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:57:59.892705  276553 cri.go:89] found id: ""
	I1216 11:57:59.892736  276553 logs.go:282] 0 containers: []
	W1216 11:57:59.892746  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:57:59.892757  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:57:59.892772  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:57:59.971569  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:57:59.971587  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:57:59.971603  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:00.047921  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:00.047965  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:00.085014  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:00.085047  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:00.135937  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:00.135976  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:02.649277  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:02.661635  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:02.661699  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:02.693776  276553 cri.go:89] found id: ""
	I1216 11:58:02.693807  276553 logs.go:282] 0 containers: []
	W1216 11:58:02.693816  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:02.693822  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:02.693870  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:02.729195  276553 cri.go:89] found id: ""
	I1216 11:58:02.729229  276553 logs.go:282] 0 containers: []
	W1216 11:58:02.729242  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:02.729271  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:02.729333  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:02.760032  276553 cri.go:89] found id: ""
	I1216 11:58:02.760077  276553 logs.go:282] 0 containers: []
	W1216 11:58:02.760089  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:02.760100  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:02.760172  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:02.796642  276553 cri.go:89] found id: ""
	I1216 11:58:02.796680  276553 logs.go:282] 0 containers: []
	W1216 11:58:02.796691  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:02.796699  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:02.796766  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:02.828972  276553 cri.go:89] found id: ""
	I1216 11:58:02.829012  276553 logs.go:282] 0 containers: []
	W1216 11:58:02.829024  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:02.829032  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:02.829101  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:02.859746  276553 cri.go:89] found id: ""
	I1216 11:58:02.859783  276553 logs.go:282] 0 containers: []
	W1216 11:58:02.859795  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:02.859802  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:02.859868  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:02.891521  276553 cri.go:89] found id: ""
	I1216 11:58:02.891559  276553 logs.go:282] 0 containers: []
	W1216 11:58:02.891584  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:02.891592  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:02.891659  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:02.924786  276553 cri.go:89] found id: ""
	I1216 11:58:02.924818  276553 logs.go:282] 0 containers: []
	W1216 11:58:02.924826  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:02.924836  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:02.924853  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:03.000503  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:03.000526  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:03.000542  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:03.073091  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:03.073134  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:03.108654  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:03.108691  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:03.159180  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:03.159219  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:05.672157  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:05.684855  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:05.684936  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:05.720145  276553 cri.go:89] found id: ""
	I1216 11:58:05.720184  276553 logs.go:282] 0 containers: []
	W1216 11:58:05.720196  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:05.720204  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:05.720272  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:05.752882  276553 cri.go:89] found id: ""
	I1216 11:58:05.752922  276553 logs.go:282] 0 containers: []
	W1216 11:58:05.752935  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:05.752944  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:05.753029  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:05.788160  276553 cri.go:89] found id: ""
	I1216 11:58:05.788189  276553 logs.go:282] 0 containers: []
	W1216 11:58:05.788203  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:05.788212  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:05.788280  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:05.820225  276553 cri.go:89] found id: ""
	I1216 11:58:05.820269  276553 logs.go:282] 0 containers: []
	W1216 11:58:05.820282  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:05.820290  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:05.820348  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:05.852663  276553 cri.go:89] found id: ""
	I1216 11:58:05.852695  276553 logs.go:282] 0 containers: []
	W1216 11:58:05.852708  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:05.852715  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:05.852787  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:05.888245  276553 cri.go:89] found id: ""
	I1216 11:58:05.888275  276553 logs.go:282] 0 containers: []
	W1216 11:58:05.888284  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:05.888297  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:05.888351  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:05.920975  276553 cri.go:89] found id: ""
	I1216 11:58:05.921006  276553 logs.go:282] 0 containers: []
	W1216 11:58:05.921014  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:05.921020  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:05.921078  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:05.953202  276553 cri.go:89] found id: ""
	I1216 11:58:05.953242  276553 logs.go:282] 0 containers: []
	W1216 11:58:05.953255  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:05.953268  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:05.953284  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:06.000475  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:06.000518  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:06.013668  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:06.013702  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:06.082568  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:06.082606  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:06.082621  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:06.153125  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:06.153170  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:08.692849  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:08.705140  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:08.705206  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:08.745953  276553 cri.go:89] found id: ""
	I1216 11:58:08.745985  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.745994  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:08.746001  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:08.746053  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:08.777650  276553 cri.go:89] found id: ""
	I1216 11:58:08.777678  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.777686  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:08.777692  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:08.777753  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:08.810501  276553 cri.go:89] found id: ""
	I1216 11:58:08.810530  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.810541  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:08.810547  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:08.810602  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:08.843082  276553 cri.go:89] found id: ""
	I1216 11:58:08.843111  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.843120  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:08.843126  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:08.843175  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:08.875195  276553 cri.go:89] found id: ""
	I1216 11:58:08.875223  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.875232  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:08.875238  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:08.875308  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:08.907296  276553 cri.go:89] found id: ""
	I1216 11:58:08.907334  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.907346  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:08.907354  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:08.907409  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:08.939491  276553 cri.go:89] found id: ""
	I1216 11:58:08.939525  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.939537  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:08.939544  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:08.939607  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:08.970370  276553 cri.go:89] found id: ""
	I1216 11:58:08.970407  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.970420  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:08.970434  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:08.970452  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:08.983347  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:08.983393  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:09.057735  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:09.057765  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:09.057784  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:09.136549  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:09.136588  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:09.186771  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:09.186811  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:11.756641  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:11.776517  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:11.776588  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:11.813876  276553 cri.go:89] found id: ""
	I1216 11:58:11.813912  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.813925  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:11.813933  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:11.814000  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:11.850775  276553 cri.go:89] found id: ""
	I1216 11:58:11.850813  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.850825  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:11.850835  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:11.850894  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:11.881886  276553 cri.go:89] found id: ""
	I1216 11:58:11.881920  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.881933  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:11.881942  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:11.882008  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:11.913165  276553 cri.go:89] found id: ""
	I1216 11:58:11.913196  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.913209  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:11.913217  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:11.913279  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:11.945192  276553 cri.go:89] found id: ""
	I1216 11:58:11.945220  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.945231  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:11.945239  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:11.945297  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:11.977631  276553 cri.go:89] found id: ""
	I1216 11:58:11.977661  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.977673  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:11.977682  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:11.977755  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:12.009497  276553 cri.go:89] found id: ""
	I1216 11:58:12.009527  276553 logs.go:282] 0 containers: []
	W1216 11:58:12.009536  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:12.009546  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:12.009610  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:12.045501  276553 cri.go:89] found id: ""
	I1216 11:58:12.045524  276553 logs.go:282] 0 containers: []
	W1216 11:58:12.045534  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:12.045547  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:12.045564  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:12.114030  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:12.114057  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:12.114073  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:12.188314  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:12.188356  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:12.224600  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:12.224632  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:12.277641  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:12.277681  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:14.791934  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:14.805168  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:14.805255  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:14.837804  276553 cri.go:89] found id: ""
	I1216 11:58:14.837834  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.837898  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:14.837911  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:14.837976  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:14.871140  276553 cri.go:89] found id: ""
	I1216 11:58:14.871171  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.871183  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:14.871191  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:14.871254  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:14.903081  276553 cri.go:89] found id: ""
	I1216 11:58:14.903118  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.903127  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:14.903133  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:14.903196  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:14.942599  276553 cri.go:89] found id: ""
	I1216 11:58:14.942637  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.942650  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:14.942658  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:14.942723  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:14.981765  276553 cri.go:89] found id: ""
	I1216 11:58:14.981797  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.981809  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:14.981816  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:14.981878  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:15.020936  276553 cri.go:89] found id: ""
	I1216 11:58:15.020977  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.020987  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:15.020993  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:15.021052  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:15.053954  276553 cri.go:89] found id: ""
	I1216 11:58:15.053995  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.054008  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:15.054016  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:15.054081  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:15.088792  276553 cri.go:89] found id: ""
	I1216 11:58:15.088828  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.088839  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:15.088852  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:15.088867  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:15.143836  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:15.143873  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:15.162594  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:15.162637  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:15.252534  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:15.252562  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:15.252578  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:15.337849  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:15.337892  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:17.880680  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:17.893716  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:17.893807  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:17.928342  276553 cri.go:89] found id: ""
	I1216 11:58:17.928379  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.928394  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:17.928402  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:17.928468  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:17.964564  276553 cri.go:89] found id: ""
	I1216 11:58:17.964609  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.964618  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:17.964624  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:17.964677  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:17.999903  276553 cri.go:89] found id: ""
	I1216 11:58:17.999937  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.999946  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:17.999952  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:18.000011  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:18.042198  276553 cri.go:89] found id: ""
	I1216 11:58:18.042230  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.042243  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:18.042250  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:18.042314  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:18.078020  276553 cri.go:89] found id: ""
	I1216 11:58:18.078056  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.078070  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:18.078080  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:18.078154  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:18.111353  276553 cri.go:89] found id: ""
	I1216 11:58:18.111392  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.111404  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:18.111412  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:18.111485  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:18.147126  276553 cri.go:89] found id: ""
	I1216 11:58:18.147161  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.147172  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:18.147178  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:18.147245  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:18.181924  276553 cri.go:89] found id: ""
	I1216 11:58:18.181962  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.181974  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:18.181989  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:18.182007  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:18.235545  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:18.235588  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:18.251579  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:18.251610  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:18.316207  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:18.316238  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:18.316255  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:18.389630  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:18.389677  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:20.929592  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:20.944290  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:20.944382  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:20.991069  276553 cri.go:89] found id: ""
	I1216 11:58:20.991107  276553 logs.go:282] 0 containers: []
	W1216 11:58:20.991118  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:20.991126  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:20.991191  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:21.033257  276553 cri.go:89] found id: ""
	I1216 11:58:21.033291  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.033304  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:21.033311  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:21.033397  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:21.068318  276553 cri.go:89] found id: ""
	I1216 11:58:21.068357  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.068370  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:21.068378  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:21.068449  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:21.100812  276553 cri.go:89] found id: ""
	I1216 11:58:21.100847  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.100860  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:21.100867  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:21.100943  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:21.136004  276553 cri.go:89] found id: ""
	I1216 11:58:21.136037  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.136048  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:21.136054  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:21.136121  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:21.172785  276553 cri.go:89] found id: ""
	I1216 11:58:21.172825  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.172836  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:21.172842  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:21.172907  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:21.207325  276553 cri.go:89] found id: ""
	I1216 11:58:21.207381  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.207402  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:21.207413  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:21.207480  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:21.242438  276553 cri.go:89] found id: ""
	I1216 11:58:21.242479  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.242493  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:21.242508  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:21.242526  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:21.283025  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:21.283069  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:21.335930  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:21.335979  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:21.349370  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:21.349403  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:21.427874  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:21.427914  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:21.427932  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:24.015947  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:24.028721  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:24.028787  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:24.061707  276553 cri.go:89] found id: ""
	I1216 11:58:24.061736  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.061745  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:24.061751  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:24.061803  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:24.095657  276553 cri.go:89] found id: ""
	I1216 11:58:24.095687  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.095696  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:24.095702  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:24.095752  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:24.128755  276553 cri.go:89] found id: ""
	I1216 11:58:24.128784  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.128793  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:24.128799  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:24.128847  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:24.162145  276553 cri.go:89] found id: ""
	I1216 11:58:24.162180  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.162189  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:24.162194  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:24.162248  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:24.194650  276553 cri.go:89] found id: ""
	I1216 11:58:24.194689  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.194702  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:24.194709  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:24.194784  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:24.226091  276553 cri.go:89] found id: ""
	I1216 11:58:24.226127  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.226139  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:24.226147  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:24.226207  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:24.258140  276553 cri.go:89] found id: ""
	I1216 11:58:24.258184  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.258194  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:24.258200  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:24.258254  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:24.289916  276553 cri.go:89] found id: ""
	I1216 11:58:24.289948  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.289957  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:24.289969  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:24.289982  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:24.338070  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:24.338118  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:24.351201  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:24.351242  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:24.422998  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:24.423027  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:24.423039  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:24.499059  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:24.499113  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:27.036987  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:27.049417  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:27.049505  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:27.080923  276553 cri.go:89] found id: ""
	I1216 11:58:27.080951  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.080971  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:27.080980  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:27.081037  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:27.111686  276553 cri.go:89] found id: ""
	I1216 11:58:27.111717  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.111725  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:27.111731  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:27.111781  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:27.142935  276553 cri.go:89] found id: ""
	I1216 11:58:27.142966  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.142976  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:27.142984  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:27.143048  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:27.176277  276553 cri.go:89] found id: ""
	I1216 11:58:27.176309  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.176320  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:27.176326  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:27.176399  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:27.206698  276553 cri.go:89] found id: ""
	I1216 11:58:27.206733  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.206744  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:27.206752  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:27.206816  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:27.238188  276553 cri.go:89] found id: ""
	I1216 11:58:27.238225  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.238245  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:27.238253  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:27.238319  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:27.269646  276553 cri.go:89] found id: ""
	I1216 11:58:27.269678  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.269690  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:27.269697  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:27.269764  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:27.304992  276553 cri.go:89] found id: ""
	I1216 11:58:27.305022  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.305032  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:27.305042  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:27.305057  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:27.379755  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:27.379798  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:27.415958  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:27.415998  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:27.468345  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:27.468378  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:27.482879  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:27.482910  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:27.551153  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:30.052180  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:30.065848  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:30.065910  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:30.108387  276553 cri.go:89] found id: ""
	I1216 11:58:30.108418  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.108428  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:30.108436  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:30.108510  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:30.143956  276553 cri.go:89] found id: ""
	I1216 11:58:30.143997  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.144008  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:30.144014  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:30.144079  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:30.177213  276553 cri.go:89] found id: ""
	I1216 11:58:30.177250  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.177263  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:30.177272  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:30.177344  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:30.210808  276553 cri.go:89] found id: ""
	I1216 11:58:30.210846  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.210858  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:30.210867  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:30.210943  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:30.243895  276553 cri.go:89] found id: ""
	I1216 11:58:30.243935  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.243947  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:30.243955  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:30.244026  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:30.282295  276553 cri.go:89] found id: ""
	I1216 11:58:30.282335  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.282347  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:30.282355  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:30.282424  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:30.325096  276553 cri.go:89] found id: ""
	I1216 11:58:30.325127  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.325137  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:30.325146  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:30.325223  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:30.368651  276553 cri.go:89] found id: ""
	I1216 11:58:30.368688  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.368702  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:30.368715  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:30.368732  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:30.429442  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:30.429481  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:30.447157  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:30.447197  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:30.525823  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:30.525851  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:30.525876  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:30.619321  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:30.619374  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:33.167369  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:33.180007  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:33.180135  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:33.216102  276553 cri.go:89] found id: ""
	I1216 11:58:33.216139  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.216149  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:33.216156  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:33.216219  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:33.264290  276553 cri.go:89] found id: ""
	I1216 11:58:33.264331  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.264351  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:33.264360  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:33.264428  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:33.307400  276553 cri.go:89] found id: ""
	I1216 11:58:33.307440  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.307452  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:33.307461  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:33.307528  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:33.348555  276553 cri.go:89] found id: ""
	I1216 11:58:33.348597  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.348610  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:33.348619  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:33.348688  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:33.385255  276553 cri.go:89] found id: ""
	I1216 11:58:33.385286  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.385296  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:33.385303  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:33.385366  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:33.422656  276553 cri.go:89] found id: ""
	I1216 11:58:33.422701  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.422713  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:33.422722  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:33.422783  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:33.461547  276553 cri.go:89] found id: ""
	I1216 11:58:33.461582  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.461591  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:33.461601  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:33.461651  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:33.496893  276553 cri.go:89] found id: ""
	I1216 11:58:33.496935  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.496948  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:33.496987  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:33.497003  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:33.510577  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:33.510609  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:33.579037  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:33.579064  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:33.579080  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:33.657142  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:33.657178  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:33.703963  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:33.703993  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:36.255123  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:36.269198  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:36.269265  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:36.302149  276553 cri.go:89] found id: ""
	I1216 11:58:36.302189  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.302202  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:36.302210  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:36.302278  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:36.334332  276553 cri.go:89] found id: ""
	I1216 11:58:36.334367  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.334378  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:36.334386  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:36.334478  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:36.367219  276553 cri.go:89] found id: ""
	I1216 11:58:36.367251  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.367262  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:36.367271  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:36.367346  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:36.409111  276553 cri.go:89] found id: ""
	I1216 11:58:36.409142  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.409154  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:36.409162  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:36.409235  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:36.453572  276553 cri.go:89] found id: ""
	I1216 11:58:36.453612  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.453624  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:36.453639  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:36.453713  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:36.498382  276553 cri.go:89] found id: ""
	I1216 11:58:36.498420  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.498430  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:36.498445  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:36.498516  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:36.533177  276553 cri.go:89] found id: ""
	I1216 11:58:36.533213  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.533225  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:36.533234  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:36.533315  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:36.568180  276553 cri.go:89] found id: ""
	I1216 11:58:36.568219  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.568232  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:36.568247  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:36.568263  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:36.631684  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:36.631732  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:36.646177  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:36.646219  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:36.715265  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:36.715298  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:36.715360  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:36.795141  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:36.795187  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:39.333144  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:39.345528  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:39.345605  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:39.380984  276553 cri.go:89] found id: ""
	I1216 11:58:39.381022  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.381042  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:39.381050  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:39.381116  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:39.414143  276553 cri.go:89] found id: ""
	I1216 11:58:39.414179  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.414192  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:39.414200  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:39.414271  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:39.451080  276553 cri.go:89] found id: ""
	I1216 11:58:39.451113  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.451124  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:39.451133  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:39.451194  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:39.486555  276553 cri.go:89] found id: ""
	I1216 11:58:39.486585  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.486593  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:39.486599  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:39.486653  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:39.519626  276553 cri.go:89] found id: ""
	I1216 11:58:39.519663  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.519676  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:39.519683  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:39.519747  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:39.551678  276553 cri.go:89] found id: ""
	I1216 11:58:39.551717  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.551729  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:39.551736  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:39.551793  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:39.585498  276553 cri.go:89] found id: ""
	I1216 11:58:39.585536  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.585548  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:39.585556  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:39.585634  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:39.619904  276553 cri.go:89] found id: ""
	I1216 11:58:39.619941  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.619952  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:39.619967  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:39.619989  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:39.698641  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:39.698673  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:39.698690  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:39.790153  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:39.790199  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:39.836401  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:39.836438  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:39.887171  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:39.887217  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:42.400773  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:42.424070  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:42.424127  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:42.467053  276553 cri.go:89] found id: ""
	I1216 11:58:42.467092  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.467103  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:42.467110  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:42.467171  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:42.510214  276553 cri.go:89] found id: ""
	I1216 11:58:42.510248  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.510260  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:42.510268  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:42.510328  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:42.553938  276553 cri.go:89] found id: ""
	I1216 11:58:42.553974  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.553986  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:42.553994  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:42.554058  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:42.595174  276553 cri.go:89] found id: ""
	I1216 11:58:42.595208  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.595220  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:42.595228  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:42.595293  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:42.631184  276553 cri.go:89] found id: ""
	I1216 11:58:42.631219  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.631231  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:42.631240  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:42.631300  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:42.665302  276553 cri.go:89] found id: ""
	I1216 11:58:42.665328  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.665338  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:42.665346  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:42.665396  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:42.702222  276553 cri.go:89] found id: ""
	I1216 11:58:42.702249  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.702257  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:42.702263  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:42.702311  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:42.735627  276553 cri.go:89] found id: ""
	I1216 11:58:42.735658  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.735667  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:42.735676  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:42.735688  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:42.786111  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:42.786144  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:42.803378  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:42.803413  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:42.882160  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:42.882190  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:42.882207  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:42.969671  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:42.969707  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:45.512113  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:45.529025  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:45.529084  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:45.563665  276553 cri.go:89] found id: ""
	I1216 11:58:45.563697  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.563708  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:45.563717  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:45.563776  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:45.596079  276553 cri.go:89] found id: ""
	I1216 11:58:45.596119  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.596132  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:45.596140  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:45.596202  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:45.629014  276553 cri.go:89] found id: ""
	I1216 11:58:45.629042  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.629055  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:45.629062  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:45.629128  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:45.671688  276553 cri.go:89] found id: ""
	I1216 11:58:45.671714  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.671725  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:45.671733  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:45.671788  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:45.711944  276553 cri.go:89] found id: ""
	I1216 11:58:45.711977  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.711987  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:45.711994  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:45.712046  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:45.752121  276553 cri.go:89] found id: ""
	I1216 11:58:45.752155  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.752164  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:45.752170  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:45.752230  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:45.785470  276553 cri.go:89] found id: ""
	I1216 11:58:45.785499  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.785510  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:45.785518  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:45.785576  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:45.819346  276553 cri.go:89] found id: ""
	I1216 11:58:45.819374  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.819387  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:45.819399  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:45.819414  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:45.855153  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:45.855199  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:45.906709  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:45.906745  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:45.919757  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:45.919788  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:45.984752  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:45.984779  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:45.984798  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:48.559896  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:48.572393  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:48.572475  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:48.603458  276553 cri.go:89] found id: ""
	I1216 11:58:48.603496  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.603508  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:48.603516  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:48.603582  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:48.639883  276553 cri.go:89] found id: ""
	I1216 11:58:48.639920  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.639931  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:48.639938  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:48.640065  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:48.671045  276553 cri.go:89] found id: ""
	I1216 11:58:48.671070  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.671079  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:48.671085  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:48.671152  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:48.703295  276553 cri.go:89] found id: ""
	I1216 11:58:48.703341  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.703351  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:48.703360  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:48.703428  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:48.736411  276553 cri.go:89] found id: ""
	I1216 11:58:48.736442  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.736451  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:48.736457  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:48.736514  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:48.767332  276553 cri.go:89] found id: ""
	I1216 11:58:48.767375  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.767387  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:48.767396  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:48.767461  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:48.800080  276553 cri.go:89] found id: ""
	I1216 11:58:48.800112  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.800123  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:48.800131  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:48.800197  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:48.832760  276553 cri.go:89] found id: ""
	I1216 11:58:48.832802  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.832814  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:48.832826  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:48.832845  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:48.848815  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:48.848855  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:48.930771  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:48.930794  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:48.930808  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:49.005468  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:49.005511  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:49.040128  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:49.040166  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:51.591281  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:51.603590  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:51.603672  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:51.634226  276553 cri.go:89] found id: ""
	I1216 11:58:51.634255  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.634263  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:51.634270  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:51.634324  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:51.665685  276553 cri.go:89] found id: ""
	I1216 11:58:51.665718  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.665726  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:51.665732  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:51.665783  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:51.697159  276553 cri.go:89] found id: ""
	I1216 11:58:51.697192  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.697200  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:51.697206  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:51.697255  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:51.729513  276553 cri.go:89] found id: ""
	I1216 11:58:51.729543  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.729551  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:51.729556  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:51.729611  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:51.760525  276553 cri.go:89] found id: ""
	I1216 11:58:51.760559  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.760568  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:51.760574  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:51.760634  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:51.791787  276553 cri.go:89] found id: ""
	I1216 11:58:51.791824  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.791835  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:51.791844  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:51.791897  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:51.823131  276553 cri.go:89] found id: ""
	I1216 11:58:51.823166  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.823177  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:51.823186  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:51.823258  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:51.854638  276553 cri.go:89] found id: ""
	I1216 11:58:51.854675  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.854688  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:51.854699  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:51.854720  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:51.903207  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:51.903247  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:51.916182  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:51.916210  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:51.978879  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:51.978906  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:51.978918  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:52.054050  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:52.054087  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:54.592784  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:54.606444  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:54.606511  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:54.641053  276553 cri.go:89] found id: ""
	I1216 11:58:54.641094  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.641106  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:54.641114  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:54.641194  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:54.672984  276553 cri.go:89] found id: ""
	I1216 11:58:54.673018  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.673027  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:54.673032  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:54.673081  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:54.705118  276553 cri.go:89] found id: ""
	I1216 11:58:54.705144  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.705153  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:54.705159  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:54.705210  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:54.735744  276553 cri.go:89] found id: ""
	I1216 11:58:54.735778  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.735791  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:54.735798  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:54.735851  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:54.767983  276553 cri.go:89] found id: ""
	I1216 11:58:54.768012  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.768020  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:54.768027  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:54.768076  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:54.799412  276553 cri.go:89] found id: ""
	I1216 11:58:54.799440  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.799448  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:54.799455  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:54.799506  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:54.830329  276553 cri.go:89] found id: ""
	I1216 11:58:54.830357  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.830365  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:54.830371  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:54.830421  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:54.861544  276553 cri.go:89] found id: ""
	I1216 11:58:54.861573  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.861583  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:54.861593  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:54.861606  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:54.911522  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:54.911562  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:54.923947  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:54.923980  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:55.000816  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:55.000838  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:55.000854  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:55.072803  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:55.072845  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:57.608748  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:57.622071  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:57.622149  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:57.653826  276553 cri.go:89] found id: ""
	I1216 11:58:57.653863  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.653876  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:57.653885  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:57.653946  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:57.686809  276553 cri.go:89] found id: ""
	I1216 11:58:57.686839  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.686852  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:57.686860  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:57.686931  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:57.719565  276553 cri.go:89] found id: ""
	I1216 11:58:57.719601  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.719613  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:57.719622  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:57.719676  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:57.752279  276553 cri.go:89] found id: ""
	I1216 11:58:57.752318  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.752330  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:57.752339  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:57.752403  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:57.785915  276553 cri.go:89] found id: ""
	I1216 11:58:57.785949  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.785961  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:57.785969  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:57.786039  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:57.818703  276553 cri.go:89] found id: ""
	I1216 11:58:57.818734  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.818748  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:57.818754  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:57.818821  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:57.856323  276553 cri.go:89] found id: ""
	I1216 11:58:57.856362  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.856371  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:57.856377  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:57.856431  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:57.888461  276553 cri.go:89] found id: ""
	I1216 11:58:57.888507  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.888515  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:57.888526  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:57.888543  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:57.924744  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:57.924783  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:57.974915  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:57.974952  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:57.987702  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:57.987737  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:58.047740  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:58.047764  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:58.047779  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:59:00.624270  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:59:00.636790  276553 kubeadm.go:597] duration metric: took 4m2.920412851s to restartPrimaryControlPlane
	W1216 11:59:00.636868  276553 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 11:59:00.636890  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 11:59:01.078876  276553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:59:01.092675  276553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:59:01.102060  276553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:59:01.111330  276553 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:59:01.111353  276553 kubeadm.go:157] found existing configuration files:
	
	I1216 11:59:01.111396  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:59:01.120045  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:59:01.120110  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:59:01.128974  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:59:01.137554  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:59:01.137630  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:59:01.146493  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:59:01.154841  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:59:01.154904  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:59:01.163934  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:59:01.172584  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:59:01.172637  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
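	(The cleanup just logged checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that is missing or stale before re-running kubeadm init. A minimal sketch of that check follows, assuming the endpoint string taken from the log; the helper name and structure are illustrative, not minikube's actual code.)

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleConfig keeps a kubeconfig only if it already targets the expected
// endpoint; otherwise it removes the file so `kubeadm init` can regenerate it.
func cleanStaleConfig(path string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // already points at the control plane; keep it
	}
	return os.Remove(path)
}

func main() {
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(p); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, "cleanup failed:", err)
		}
	}
}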
	I1216 11:59:01.181391  276553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 11:59:01.369411  276553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:00:57.257269  276553 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 12:00:57.257376  276553 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 12:00:57.258891  276553 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 12:00:57.258974  276553 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:00:57.259041  276553 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:00:57.259123  276553 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:00:57.259218  276553 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:00:57.259321  276553 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:00:57.262146  276553 out.go:235]   - Generating certificates and keys ...
	I1216 12:00:57.262267  276553 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:00:57.262347  276553 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:00:57.262465  276553 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:00:57.262571  276553 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:00:57.262667  276553 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:00:57.262717  276553 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:00:57.262791  276553 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:00:57.262860  276553 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:00:57.262924  276553 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:00:57.262996  276553 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:00:57.263030  276553 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:00:57.263084  276553 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:00:57.263135  276553 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:00:57.263181  276553 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:00:57.263235  276553 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:00:57.263281  276553 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:00:57.263373  276553 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:00:57.263445  276553 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:00:57.263481  276553 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:00:57.263542  276553 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:00:57.265255  276553 out.go:235]   - Booting up control plane ...
	I1216 12:00:57.265379  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:00:57.265453  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:00:57.265511  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:00:57.265629  276553 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:00:57.265768  276553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:00:57.265811  276553 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 12:00:57.265917  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266078  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266159  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266350  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266437  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266649  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266712  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266895  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266973  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.267138  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.267150  276553 kubeadm.go:310] 
	I1216 12:00:57.267214  276553 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 12:00:57.267271  276553 kubeadm.go:310] 		timed out waiting for the condition
	I1216 12:00:57.267281  276553 kubeadm.go:310] 
	I1216 12:00:57.267334  276553 kubeadm.go:310] 	This error is likely caused by:
	I1216 12:00:57.267378  276553 kubeadm.go:310] 		- The kubelet is not running
	I1216 12:00:57.267488  276553 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 12:00:57.267499  276553 kubeadm.go:310] 
	I1216 12:00:57.267604  276553 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 12:00:57.267659  276553 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 12:00:57.267700  276553 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 12:00:57.267716  276553 kubeadm.go:310] 
	I1216 12:00:57.267867  276553 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 12:00:57.267965  276553 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 12:00:57.267976  276553 kubeadm.go:310] 
	I1216 12:00:57.268074  276553 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 12:00:57.268144  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 12:00:57.268210  276553 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 12:00:57.268279  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 12:00:57.268328  276553 kubeadm.go:310] 
	W1216 12:00:57.268428  276553 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
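	(The wait-control-plane failure above is kubeadm repeatedly probing the kubelet's healthz endpoint on localhost:10248 and getting "connection refused" until the 4m0s budget expires. The endpoint, the failure mode, and the 4-minute limit come from the log; the retry loop below is only an illustrative sketch, not kubeadm's implementation.)

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	deadline := time.Now().Add(4 * time.Minute) // matches kubeadm's stated 4m0s wait

	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		} else {
			// On this run the kubelet never came up, so every attempt fails with
			// "connect: connection refused", as seen in the kubeadm output above.
			fmt.Println("kubelet not healthy yet:", err)
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the kubelet healthz endpoint")
}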
	
	I1216 12:00:57.268489  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 12:00:57.717860  276553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 12:00:57.733963  276553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:00:57.744259  276553 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 12:00:57.744288  276553 kubeadm.go:157] found existing configuration files:
	
	I1216 12:00:57.744336  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 12:00:57.753893  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 12:00:57.753977  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:00:57.764071  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 12:00:57.773595  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 12:00:57.773682  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:00:57.783828  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 12:00:57.793769  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 12:00:57.793839  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:00:57.803766  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 12:00:57.813437  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 12:00:57.813513  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 12:00:57.823881  276553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 12:00:57.888749  276553 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 12:00:57.888835  276553 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:00:58.038785  276553 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:00:58.038916  276553 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:00:58.039088  276553 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:00:58.223884  276553 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:00:58.225611  276553 out.go:235]   - Generating certificates and keys ...
	I1216 12:00:58.225731  276553 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:00:58.225852  276553 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:00:58.225980  276553 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:00:58.226074  276553 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:00:58.226178  276553 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:00:58.226255  276553 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:00:58.226344  276553 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:00:58.226424  276553 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:00:58.226551  276553 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:00:58.226688  276553 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:00:58.226756  276553 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:00:58.226821  276553 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:00:58.353567  276553 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:00:58.694503  276553 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:00:58.792660  276553 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:00:59.086043  276553 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:00:59.108391  276553 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:00:59.108558  276553 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:00:59.108623  276553 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:00:59.247927  276553 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:00:59.249627  276553 out.go:235]   - Booting up control plane ...
	I1216 12:00:59.249774  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:00:59.251436  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:00:59.254163  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:00:59.257479  276553 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:00:59.261730  276553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:01:39.263454  276553 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 12:01:39.263569  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:39.263847  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:01:44.264678  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:44.264927  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:01:54.265352  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:54.265639  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:14.265999  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:02:14.266235  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:54.265070  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:02:54.265312  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:54.265327  276553 kubeadm.go:310] 
	I1216 12:02:54.265385  276553 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 12:02:54.265445  276553 kubeadm.go:310] 		timed out waiting for the condition
	I1216 12:02:54.265455  276553 kubeadm.go:310] 
	I1216 12:02:54.265515  276553 kubeadm.go:310] 	This error is likely caused by:
	I1216 12:02:54.265563  276553 kubeadm.go:310] 		- The kubelet is not running
	I1216 12:02:54.265722  276553 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 12:02:54.265750  276553 kubeadm.go:310] 
	I1216 12:02:54.265890  276553 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 12:02:54.265936  276553 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 12:02:54.265973  276553 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 12:02:54.265995  276553 kubeadm.go:310] 
	I1216 12:02:54.266136  276553 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 12:02:54.266255  276553 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 12:02:54.266265  276553 kubeadm.go:310] 
	I1216 12:02:54.266405  276553 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 12:02:54.266530  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 12:02:54.266638  276553 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 12:02:54.266729  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 12:02:54.266748  276553 kubeadm.go:310] 
	I1216 12:02:54.267271  276553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:02:54.267355  276553 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 12:02:54.267426  276553 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 12:02:54.267491  276553 kubeadm.go:394] duration metric: took 7m56.598620484s to StartCluster
	I1216 12:02:54.267542  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 12:02:54.267613  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 12:02:54.301812  276553 cri.go:89] found id: ""
	I1216 12:02:54.301847  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.301855  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 12:02:54.301863  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 12:02:54.301917  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 12:02:54.334730  276553 cri.go:89] found id: ""
	I1216 12:02:54.334768  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.334780  276553 logs.go:284] No container was found matching "etcd"
	I1216 12:02:54.334788  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 12:02:54.334853  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 12:02:54.366080  276553 cri.go:89] found id: ""
	I1216 12:02:54.366115  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.366128  276553 logs.go:284] No container was found matching "coredns"
	I1216 12:02:54.366136  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 12:02:54.366202  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 12:02:54.396447  276553 cri.go:89] found id: ""
	I1216 12:02:54.396483  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.396495  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 12:02:54.396503  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 12:02:54.396584  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 12:02:54.429291  276553 cri.go:89] found id: ""
	I1216 12:02:54.429326  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.429337  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 12:02:54.429345  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 12:02:54.429409  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 12:02:54.460235  276553 cri.go:89] found id: ""
	I1216 12:02:54.460268  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.460276  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 12:02:54.460283  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 12:02:54.460334  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 12:02:54.492739  276553 cri.go:89] found id: ""
	I1216 12:02:54.492771  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.492780  276553 logs.go:284] No container was found matching "kindnet"
	I1216 12:02:54.492787  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 12:02:54.492840  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 12:02:54.524322  276553 cri.go:89] found id: ""
	I1216 12:02:54.524358  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.524369  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 12:02:54.524384  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 12:02:54.524400  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:02:54.575979  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 12:02:54.576022  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:02:54.591148  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:02:54.591184  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 12:02:54.704231  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 12:02:54.704259  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 12:02:54.704277  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 12:02:54.804001  276553 logs.go:123] Gathering logs for container status ...
	I1216 12:02:54.804047  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 12:02:54.842021  276553 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 12:02:54.842097  276553 out.go:270] * 
	W1216 12:02:54.842173  276553 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 12:02:54.842192  276553 out.go:270] * 
	W1216 12:02:54.843372  276553 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:02:54.847542  276553 out.go:201] 
	W1216 12:02:54.848991  276553 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 12:02:54.849037  276553 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 12:02:54.849054  276553 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 12:02:54.850514  276553 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-933974 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
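Triage note: the kubeadm output above shows the kubelet never answering on localhost:10248, so the control plane times out. A minimal triage sketch, reusing only the commands the log itself suggests; running them through "minikube ssh" from the host is an assumption (the log shows them being run on the node), and the retry flags below are a subset of the failing args plus the Suggestion line, not a verified fix:

	# Inspect the kubelet on the failing node (commands suggested by kubeadm above)
	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry with the cgroup-driver override proposed in the Suggestion line of the log
	out/minikube-linux-amd64 start -p old-k8s-version-933974 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd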
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 2 (241.129975ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-933974 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-987169 image list                          | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| delete  | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| start   | -p newest-cni-409154 --memory=2200 --alsologtostderr   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-935544                           | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	| image   | no-preload-181484 image list                           | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| delete  | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| addons  | enable metrics-server -p newest-cni-409154             | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-409154                  | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-409154 --memory=2200 --alsologtostderr   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-409154 image list                           | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	| delete  | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:58:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:58:10.457214  279095 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:58:10.457320  279095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:58:10.457328  279095 out.go:358] Setting ErrFile to fd 2...
	I1216 11:58:10.457332  279095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:58:10.457523  279095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:58:10.458091  279095 out.go:352] Setting JSON to false
	I1216 11:58:10.459068  279095 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13237,"bootTime":1734337053,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 11:58:10.459136  279095 start.go:139] virtualization: kvm guest
	I1216 11:58:10.461398  279095 out.go:177] * [newest-cni-409154] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 11:58:10.462722  279095 notify.go:220] Checking for updates...
	I1216 11:58:10.462776  279095 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:58:10.464205  279095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:58:10.465623  279095 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:58:10.466987  279095 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:58:10.468240  279095 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 11:58:10.469465  279095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:58:10.470955  279095 config.go:182] Loaded profile config "newest-cni-409154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:58:10.471351  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:10.471415  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:10.486592  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
	I1216 11:58:10.487085  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:10.487663  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:10.487693  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:10.488179  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:10.488439  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:10.488761  279095 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:58:10.489224  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:10.489296  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:10.505146  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I1216 11:58:10.505678  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:10.506233  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:10.506264  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:10.506714  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:10.506902  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:10.544395  279095 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 11:58:10.545779  279095 start.go:297] selected driver: kvm2
	I1216 11:58:10.545792  279095 start.go:901] validating driver "kvm2" against &{Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s
ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:58:10.545905  279095 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:58:10.546668  279095 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:58:10.546758  279095 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 11:58:10.563076  279095 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 11:58:10.563675  279095 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 11:58:10.563714  279095 cni.go:84] Creating CNI manager for ""
	I1216 11:58:10.563781  279095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:58:10.563837  279095 start.go:340] cluster config:
	{Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:58:10.564033  279095 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:58:10.565811  279095 out.go:177] * Starting "newest-cni-409154" primary control-plane node in "newest-cni-409154" cluster
	I1216 11:58:10.567051  279095 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:58:10.567086  279095 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1216 11:58:10.567099  279095 cache.go:56] Caching tarball of preloaded images
	I1216 11:58:10.567176  279095 preload.go:172] Found /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 11:58:10.567186  279095 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1216 11:58:10.567281  279095 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/config.json ...
	I1216 11:58:10.567464  279095 start.go:360] acquireMachinesLock for newest-cni-409154: {Name:mk416695f38f04563a2ca155c5383ee4d8f0f97b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:58:10.567508  279095 start.go:364] duration metric: took 24.753µs to acquireMachinesLock for "newest-cni-409154"
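The "duration metric" line above is a simple elapsed-time measurement taken around the machines-lock acquisition. A minimal sketch of that pattern (the mutex here is a stand-in, not minikube's actual machines lock):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    var machinesLock sync.Mutex

    func main() {
        start := time.Now()
        machinesLock.Lock() // stands in for acquireMachinesLock in the log
        defer machinesLock.Unlock()
        // Matches the shape of: "duration metric: took 24.753µs to acquireMachinesLock"
        fmt.Printf("duration metric: took %s to acquireMachinesLock\n", time.Since(start))
    }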
	I1216 11:58:10.567522  279095 start.go:96] Skipping create...Using existing machine configuration
	I1216 11:58:10.567530  279095 fix.go:54] fixHost starting: 
	I1216 11:58:10.567819  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:10.567855  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:10.582641  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I1216 11:58:10.583122  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:10.583779  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:10.583807  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:10.584109  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:10.584302  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:10.584447  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:10.585895  279095 fix.go:112] recreateIfNeeded on newest-cni-409154: state=Stopped err=<nil>
	I1216 11:58:10.585928  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	W1216 11:58:10.586110  279095 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 11:58:10.587967  279095 out.go:177] * Restarting existing kvm2 VM for "newest-cni-409154" ...
	I1216 11:58:08.692849  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:08.705140  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:08.705206  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:08.745953  276553 cri.go:89] found id: ""
	I1216 11:58:08.745985  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.745994  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:08.746001  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:08.746053  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:08.777650  276553 cri.go:89] found id: ""
	I1216 11:58:08.777678  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.777686  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:08.777692  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:08.777753  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:08.810501  276553 cri.go:89] found id: ""
	I1216 11:58:08.810530  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.810541  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:08.810547  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:08.810602  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:08.843082  276553 cri.go:89] found id: ""
	I1216 11:58:08.843111  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.843120  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:08.843126  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:08.843175  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:08.875195  276553 cri.go:89] found id: ""
	I1216 11:58:08.875223  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.875232  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:08.875238  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:08.875308  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:08.907296  276553 cri.go:89] found id: ""
	I1216 11:58:08.907334  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.907346  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:08.907354  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:08.907409  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:08.939491  276553 cri.go:89] found id: ""
	I1216 11:58:08.939525  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.939537  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:08.939544  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:08.939607  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:08.970370  276553 cri.go:89] found id: ""
	I1216 11:58:08.970407  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.970420  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
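The cycle above queries CRI-O once per control-plane component and records an empty result for each ("0 containers"). A minimal, illustrative Go sketch of that query pattern (not minikube's actual cri.go code; only the sudo/crictl invocation is taken from the logged command):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs the same query the log repeats for each component:
    //   sudo crictl ps -a --quiet --name=<name>
    // and returns the container IDs, one per non-empty output line.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps failed: %w", err)
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := listContainerIDs(component)
            if err != nil {
                fmt.Println("error:", err)
                continue
            }
            // An empty slice corresponds to the "0 containers" / "No container
            // was found matching" warnings in the log above.
            fmt.Printf("%s: %d containers: %v\n", component, len(ids), ids)
        }
    }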
	I1216 11:58:08.970434  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:08.970452  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:08.983347  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:08.983393  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:09.057735  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
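Every "describe nodes" attempt fails the same way because nothing is listening on the apiserver port yet. A quick reachability probe, purely illustrative and not part of minikube, reproduces the condition kubectl is reporting:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the port kubectl is trying to reach; a refused connection here
        // matches the "connection to the server localhost:8443 was refused" error.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }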
	I1216 11:58:09.057765  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:09.057784  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:09.136549  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:09.136588  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:09.186771  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:09.186811  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:11.756641  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:11.776517  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:11.776588  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:11.813876  276553 cri.go:89] found id: ""
	I1216 11:58:11.813912  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.813925  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:11.813933  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:11.814000  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:11.850775  276553 cri.go:89] found id: ""
	I1216 11:58:11.850813  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.850825  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:11.850835  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:11.850894  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:11.881886  276553 cri.go:89] found id: ""
	I1216 11:58:11.881920  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.881933  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:11.881942  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:11.882008  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:11.913165  276553 cri.go:89] found id: ""
	I1216 11:58:11.913196  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.913209  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:11.913217  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:11.913279  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:11.945192  276553 cri.go:89] found id: ""
	I1216 11:58:11.945220  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.945231  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:11.945239  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:11.945297  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:11.977631  276553 cri.go:89] found id: ""
	I1216 11:58:11.977661  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.977673  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:11.977682  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:11.977755  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:12.009497  276553 cri.go:89] found id: ""
	I1216 11:58:12.009527  276553 logs.go:282] 0 containers: []
	W1216 11:58:12.009536  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:12.009546  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:12.009610  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:12.045501  276553 cri.go:89] found id: ""
	I1216 11:58:12.045524  276553 logs.go:282] 0 containers: []
	W1216 11:58:12.045534  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:12.045547  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:12.045564  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:12.114030  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:12.114057  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:12.114073  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:12.188314  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:12.188356  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:12.224600  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:12.224632  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:12.277641  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:12.277681  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:10.589206  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Start
	I1216 11:58:10.589378  279095 main.go:141] libmachine: (newest-cni-409154) starting domain...
	I1216 11:58:10.589402  279095 main.go:141] libmachine: (newest-cni-409154) ensuring networks are active...
	I1216 11:58:10.590045  279095 main.go:141] libmachine: (newest-cni-409154) Ensuring network default is active
	I1216 11:58:10.590345  279095 main.go:141] libmachine: (newest-cni-409154) Ensuring network mk-newest-cni-409154 is active
	I1216 11:58:10.590691  279095 main.go:141] libmachine: (newest-cni-409154) getting domain XML...
	I1216 11:58:10.591328  279095 main.go:141] libmachine: (newest-cni-409154) creating domain...
	I1216 11:58:11.793966  279095 main.go:141] libmachine: (newest-cni-409154) waiting for IP...
	I1216 11:58:11.795095  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:11.795603  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:11.795695  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:11.795591  279132 retry.go:31] will retry after 244.170622ms: waiting for domain to come up
	I1216 11:58:12.041392  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:12.042035  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:12.042065  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:12.042003  279132 retry.go:31] will retry after 378.076417ms: waiting for domain to come up
	I1216 11:58:12.421749  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:12.422240  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:12.422267  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:12.422216  279132 retry.go:31] will retry after 370.938245ms: waiting for domain to come up
	I1216 11:58:12.794930  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:12.795410  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:12.795430  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:12.795372  279132 retry.go:31] will retry after 380.56228ms: waiting for domain to come up
	I1216 11:58:13.177977  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:13.178564  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:13.178597  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:13.178500  279132 retry.go:31] will retry after 582.330697ms: waiting for domain to come up
	I1216 11:58:13.762033  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:13.762664  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:13.762701  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:13.762593  279132 retry.go:31] will retry after 600.533428ms: waiting for domain to come up
	I1216 11:58:14.364374  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:14.364791  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:14.364828  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:14.364752  279132 retry.go:31] will retry after 773.596823ms: waiting for domain to come up
	I1216 11:58:15.139784  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:15.140270  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:15.140300  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:15.140224  279132 retry.go:31] will retry after 1.264403571s: waiting for domain to come up
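The "waiting for IP" phase polls the libvirt DHCP lease for the domain's MAC address and retries with a growing, jittered delay (the "will retry after ..." lines from retry.go:31). A minimal sketch of that retry shape, with a hypothetical lookupIP stub standing in for the real lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a stand-in for querying the libvirt DHCP leases for the
    // domain's MAC address; it is hypothetical, not minikube's real code.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address of domain")
    }

    // waitForIP retries with a randomized, growing delay, mirroring the
    // "will retry after ...: waiting for domain to come up" lines above.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", delay+jitter)
            time.Sleep(delay + jitter)
            delay *= 2 // back off between attempts
        }
        return "", errors.New("timed out waiting for domain IP")
    }

    func main() {
        if ip, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found domain IP:", ip)
        }
    }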
	I1216 11:58:14.791934  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:14.805168  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:14.805255  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:14.837804  276553 cri.go:89] found id: ""
	I1216 11:58:14.837834  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.837898  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:14.837911  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:14.837976  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:14.871140  276553 cri.go:89] found id: ""
	I1216 11:58:14.871171  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.871183  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:14.871191  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:14.871254  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:14.903081  276553 cri.go:89] found id: ""
	I1216 11:58:14.903118  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.903127  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:14.903133  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:14.903196  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:14.942599  276553 cri.go:89] found id: ""
	I1216 11:58:14.942637  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.942650  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:14.942658  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:14.942723  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:14.981765  276553 cri.go:89] found id: ""
	I1216 11:58:14.981797  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.981809  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:14.981816  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:14.981878  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:15.020936  276553 cri.go:89] found id: ""
	I1216 11:58:15.020977  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.020987  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:15.020993  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:15.021052  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:15.053954  276553 cri.go:89] found id: ""
	I1216 11:58:15.053995  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.054008  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:15.054016  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:15.054081  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:15.088792  276553 cri.go:89] found id: ""
	I1216 11:58:15.088828  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.088839  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:15.088852  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:15.088867  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:15.143836  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:15.143873  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:15.162594  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:15.162637  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:15.252534  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:15.252562  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:15.252578  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:15.337849  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:15.337892  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:17.880680  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:17.893716  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:17.893807  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:17.928342  276553 cri.go:89] found id: ""
	I1216 11:58:17.928379  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.928394  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:17.928402  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:17.928468  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:17.964564  276553 cri.go:89] found id: ""
	I1216 11:58:17.964609  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.964618  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:17.964624  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:17.964677  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:16.406244  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:16.406755  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:16.406782  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:16.406707  279132 retry.go:31] will retry after 1.148140994s: waiting for domain to come up
	I1216 11:58:17.557073  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:17.557603  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:17.557625  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:17.557562  279132 retry.go:31] will retry after 1.49928484s: waiting for domain to come up
	I1216 11:58:19.058022  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:19.058469  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:19.058493  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:19.058429  279132 retry.go:31] will retry after 1.785857688s: waiting for domain to come up
	I1216 11:58:17.999903  276553 cri.go:89] found id: ""
	I1216 11:58:17.999937  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.999946  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:17.999952  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:18.000011  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:18.042198  276553 cri.go:89] found id: ""
	I1216 11:58:18.042230  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.042243  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:18.042250  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:18.042314  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:18.078020  276553 cri.go:89] found id: ""
	I1216 11:58:18.078056  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.078070  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:18.078080  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:18.078154  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:18.111353  276553 cri.go:89] found id: ""
	I1216 11:58:18.111392  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.111404  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:18.111412  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:18.111485  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:18.147126  276553 cri.go:89] found id: ""
	I1216 11:58:18.147161  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.147172  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:18.147178  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:18.147245  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:18.181924  276553 cri.go:89] found id: ""
	I1216 11:58:18.181962  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.181974  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:18.181989  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:18.182007  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:18.235545  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:18.235588  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:18.251579  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:18.251610  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:18.316207  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:18.316238  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:18.316255  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:18.389630  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:18.389677  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:20.929592  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:20.944290  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:20.944382  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:20.991069  276553 cri.go:89] found id: ""
	I1216 11:58:20.991107  276553 logs.go:282] 0 containers: []
	W1216 11:58:20.991118  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:20.991126  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:20.991191  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:21.033257  276553 cri.go:89] found id: ""
	I1216 11:58:21.033291  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.033304  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:21.033311  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:21.033397  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:21.068318  276553 cri.go:89] found id: ""
	I1216 11:58:21.068357  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.068370  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:21.068378  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:21.068449  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:21.100812  276553 cri.go:89] found id: ""
	I1216 11:58:21.100847  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.100860  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:21.100867  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:21.100943  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:21.136004  276553 cri.go:89] found id: ""
	I1216 11:58:21.136037  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.136048  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:21.136054  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:21.136121  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:21.172785  276553 cri.go:89] found id: ""
	I1216 11:58:21.172825  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.172836  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:21.172842  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:21.172907  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:21.207325  276553 cri.go:89] found id: ""
	I1216 11:58:21.207381  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.207402  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:21.207413  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:21.207480  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:21.242438  276553 cri.go:89] found id: ""
	I1216 11:58:21.242479  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.242493  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:21.242508  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:21.242526  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:21.283025  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:21.283069  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:21.335930  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:21.335979  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:21.349370  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:21.349403  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:21.427874  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:21.427914  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:21.427932  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:20.846031  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:20.846581  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:20.846631  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:20.846572  279132 retry.go:31] will retry after 2.9103898s: waiting for domain to come up
	I1216 11:58:23.760767  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:23.761253  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:23.761287  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:23.761188  279132 retry.go:31] will retry after 3.698063043s: waiting for domain to come up
	I1216 11:58:24.015947  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:24.028721  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:24.028787  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:24.061707  276553 cri.go:89] found id: ""
	I1216 11:58:24.061736  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.061745  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:24.061751  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:24.061803  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:24.095657  276553 cri.go:89] found id: ""
	I1216 11:58:24.095687  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.095696  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:24.095702  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:24.095752  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:24.128755  276553 cri.go:89] found id: ""
	I1216 11:58:24.128784  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.128793  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:24.128799  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:24.128847  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:24.162145  276553 cri.go:89] found id: ""
	I1216 11:58:24.162180  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.162189  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:24.162194  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:24.162248  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:24.194650  276553 cri.go:89] found id: ""
	I1216 11:58:24.194689  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.194702  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:24.194709  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:24.194784  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:24.226091  276553 cri.go:89] found id: ""
	I1216 11:58:24.226127  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.226139  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:24.226147  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:24.226207  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:24.258140  276553 cri.go:89] found id: ""
	I1216 11:58:24.258184  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.258194  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:24.258200  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:24.258254  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:24.289916  276553 cri.go:89] found id: ""
	I1216 11:58:24.289948  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.289957  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:24.289969  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:24.289982  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:24.338070  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:24.338118  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:24.351201  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:24.351242  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:24.422998  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:24.423027  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:24.423039  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:24.499059  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:24.499113  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:27.036987  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:27.049417  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:27.049505  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:27.080923  276553 cri.go:89] found id: ""
	I1216 11:58:27.080951  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.080971  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:27.080980  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:27.081037  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:27.111686  276553 cri.go:89] found id: ""
	I1216 11:58:27.111717  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.111725  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:27.111731  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:27.111781  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:27.142935  276553 cri.go:89] found id: ""
	I1216 11:58:27.142966  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.142976  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:27.142984  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:27.143048  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:27.176277  276553 cri.go:89] found id: ""
	I1216 11:58:27.176309  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.176320  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:27.176326  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:27.176399  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:27.206698  276553 cri.go:89] found id: ""
	I1216 11:58:27.206733  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.206744  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:27.206752  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:27.206816  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:27.238188  276553 cri.go:89] found id: ""
	I1216 11:58:27.238225  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.238245  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:27.238253  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:27.238319  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:27.269646  276553 cri.go:89] found id: ""
	I1216 11:58:27.269678  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.269690  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:27.269697  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:27.269764  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:27.304992  276553 cri.go:89] found id: ""
	I1216 11:58:27.305022  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.305032  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:27.305042  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:27.305057  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:27.379755  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:27.379798  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:27.415958  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:27.415998  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:27.468345  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:27.468378  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:27.482879  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:27.482910  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:27.551153  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:27.461758  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.462297  279095 main.go:141] libmachine: (newest-cni-409154) found domain IP: 192.168.39.202
	I1216 11:58:27.462330  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has current primary IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.462345  279095 main.go:141] libmachine: (newest-cni-409154) reserving static IP address...
	I1216 11:58:27.462706  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "newest-cni-409154", mac: "52:54:00:23:fb:52", ip: "192.168.39.202"} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.462733  279095 main.go:141] libmachine: (newest-cni-409154) DBG | skip adding static IP to network mk-newest-cni-409154 - found existing host DHCP lease matching {name: "newest-cni-409154", mac: "52:54:00:23:fb:52", ip: "192.168.39.202"}
	I1216 11:58:27.462751  279095 main.go:141] libmachine: (newest-cni-409154) reserved static IP address 192.168.39.202 for domain newest-cni-409154
	I1216 11:58:27.462761  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Getting to WaitForSSH function...
	I1216 11:58:27.462769  279095 main.go:141] libmachine: (newest-cni-409154) waiting for SSH...
	I1216 11:58:27.464970  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.465299  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.465323  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.465446  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Using SSH client type: external
	I1216 11:58:27.465486  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa (-rw-------)
	I1216 11:58:27.465535  279095 main.go:141] libmachine: (newest-cni-409154) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 11:58:27.465568  279095 main.go:141] libmachine: (newest-cni-409154) DBG | About to run SSH command:
	I1216 11:58:27.465586  279095 main.go:141] libmachine: (newest-cni-409154) DBG | exit 0
	I1216 11:58:27.589004  279095 main.go:141] libmachine: (newest-cni-409154) DBG | SSH cmd err, output: <nil>: 
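With the external SSH client type, the liveness check above is simply /usr/bin/ssh run with the logged options and the command `exit 0`. A hedged sketch of that invocation from Go (the key path is a placeholder and the flag list is abbreviated from the one in the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Placeholder key path; the log uses the machine's id_rsa and the IP
        // discovered from the DHCP lease (192.168.39.202).
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-i", "/path/to/machines/newest-cni-409154/id_rsa",
            "-p", "22",
            "docker@192.168.39.202",
            "exit 0", // the same liveness probe the log runs
        }
        out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
        fmt.Printf("ssh output: %q, err: %v\n", out, err)
    }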
	I1216 11:58:27.589479  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetConfigRaw
	I1216 11:58:27.590146  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:27.592843  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.593292  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.593326  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.593571  279095 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/config.json ...
	I1216 11:58:27.593797  279095 machine.go:93] provisionDockerMachine start ...
	I1216 11:58:27.593817  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:27.594055  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.597195  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.597567  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.597598  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.597715  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:27.597907  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.598105  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.598253  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:27.598462  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:27.598720  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:27.598734  279095 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 11:58:27.697242  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 11:58:27.697284  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetMachineName
	I1216 11:58:27.697579  279095 buildroot.go:166] provisioning hostname "newest-cni-409154"
	I1216 11:58:27.697618  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetMachineName
	I1216 11:58:27.697818  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.700788  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.701199  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.701231  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.701465  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:27.701659  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.701868  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.701996  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:27.702154  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:27.702385  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:27.702412  279095 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-409154 && echo "newest-cni-409154" | sudo tee /etc/hostname
	I1216 11:58:27.810794  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-409154
	
	I1216 11:58:27.810827  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.813678  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.814176  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.814219  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.814350  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:27.814559  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.814706  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.814856  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:27.815025  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:27.815211  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:27.815227  279095 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-409154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-409154/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-409154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:58:27.921763  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:58:27.921799  279095 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 11:58:27.921855  279095 buildroot.go:174] setting up certificates
	I1216 11:58:27.921869  279095 provision.go:84] configureAuth start
	I1216 11:58:27.921885  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetMachineName
	I1216 11:58:27.922180  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:27.924925  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.925273  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.925305  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.925452  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.927662  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.927976  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.928006  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.928167  279095 provision.go:143] copyHostCerts
	I1216 11:58:27.928234  279095 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem, removing ...
	I1216 11:58:27.928247  279095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem
	I1216 11:58:27.928329  279095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 11:58:27.928444  279095 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem, removing ...
	I1216 11:58:27.928456  279095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem
	I1216 11:58:27.928491  279095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 11:58:27.928836  279095 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem, removing ...
	I1216 11:58:27.928995  279095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem
	I1216 11:58:27.929057  279095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
	I1216 11:58:27.929198  279095 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.newest-cni-409154 san=[127.0.0.1 192.168.39.202 localhost minikube newest-cni-409154]
	I1216 11:58:28.119927  279095 provision.go:177] copyRemoteCerts
	I1216 11:58:28.119993  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:58:28.120033  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.122642  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.122863  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.122888  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.123099  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.123312  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.123510  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.123639  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.203158  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:58:28.230017  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 11:58:28.255874  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 11:58:28.281032  279095 provision.go:87] duration metric: took 359.143013ms to configureAuth
	I1216 11:58:28.281064  279095 buildroot.go:189] setting minikube options for container-runtime
	I1216 11:58:28.281272  279095 config.go:182] Loaded profile config "newest-cni-409154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:58:28.281381  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.283867  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.284173  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.284205  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.284362  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.284586  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.284761  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.284907  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.285075  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:28.285289  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:28.285311  279095 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:58:28.493363  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:58:28.493395  279095 machine.go:96] duration metric: took 899.585204ms to provisionDockerMachine
	I1216 11:58:28.493418  279095 start.go:293] postStartSetup for "newest-cni-409154" (driver="kvm2")
	I1216 11:58:28.493435  279095 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:58:28.493464  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.493804  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:58:28.493837  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.496887  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.497252  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.497305  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.497551  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.497781  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.497974  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.498122  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.575105  279095 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:58:28.579116  279095 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 11:58:28.579146  279095 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 11:58:28.579210  279095 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 11:58:28.579283  279095 filesync.go:149] local asset: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem -> 2175192.pem in /etc/ssl/certs
	I1216 11:58:28.579384  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 11:58:28.588438  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:58:28.611018  279095 start.go:296] duration metric: took 117.581046ms for postStartSetup
	I1216 11:58:28.611076  279095 fix.go:56] duration metric: took 18.043540567s for fixHost
	I1216 11:58:28.611100  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.614398  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.614793  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.614826  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.615084  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.615326  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.615523  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.615719  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.615908  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:28.616090  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:28.616105  279095 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 11:58:28.717339  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734350308.689557050
	
	I1216 11:58:28.717371  279095 fix.go:216] guest clock: 1734350308.689557050
	I1216 11:58:28.717382  279095 fix.go:229] Guest: 2024-12-16 11:58:28.68955705 +0000 UTC Remote: 2024-12-16 11:58:28.611080616 +0000 UTC m=+18.193007687 (delta=78.476434ms)
	I1216 11:58:28.717413  279095 fix.go:200] guest clock delta is within tolerance: 78.476434ms
	I1216 11:58:28.717419  279095 start.go:83] releasing machines lock for "newest-cni-409154", held for 18.149901468s
	I1216 11:58:28.717440  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.717739  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:28.720755  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.721190  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.721220  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.721383  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.721877  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.722040  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.722130  279095 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:58:28.722179  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.722312  279095 ssh_runner.go:195] Run: cat /version.json
	I1216 11:58:28.722337  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.724752  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725087  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.725113  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725133  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725285  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.725472  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.725600  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.725623  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725634  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.725803  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.725790  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.725944  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.726118  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.726278  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.798361  279095 ssh_runner.go:195] Run: systemctl --version
	I1216 11:58:28.823281  279095 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:58:28.965469  279095 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 11:58:28.970957  279095 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 11:58:28.971032  279095 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:58:28.986070  279095 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 11:58:28.986095  279095 start.go:495] detecting cgroup driver to use...
	I1216 11:58:28.986168  279095 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:58:29.002166  279095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:58:29.015245  279095 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:58:29.015357  279095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:58:29.028270  279095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:58:29.040809  279095 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:58:29.153768  279095 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:58:29.296765  279095 docker.go:233] disabling docker service ...
	I1216 11:58:29.296853  279095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:58:29.310642  279095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:58:29.322968  279095 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:58:29.458651  279095 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:58:29.569319  279095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:58:29.583488  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:58:29.602278  279095 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 11:58:29.602346  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.612191  279095 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:58:29.612256  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.621862  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.631438  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.641222  279095 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:58:29.652611  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.663073  279095 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.679545  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.690214  279095 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:58:29.699851  279095 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 11:58:29.699926  279095 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 11:58:29.713189  279095 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 11:58:29.722840  279095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:58:29.848101  279095 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 11:58:29.935007  279095 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:58:29.935088  279095 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:58:29.939824  279095 start.go:563] Will wait 60s for crictl version
	I1216 11:58:29.939910  279095 ssh_runner.go:195] Run: which crictl
	I1216 11:58:29.943491  279095 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:58:29.980696  279095 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 11:58:29.980807  279095 ssh_runner.go:195] Run: crio --version
	I1216 11:58:30.009245  279095 ssh_runner.go:195] Run: crio --version
	I1216 11:58:30.038597  279095 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1216 11:58:30.040039  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:30.042931  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:30.043287  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:30.043320  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:30.043662  279095 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 11:58:30.047939  279095 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:58:30.062384  279095 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 11:58:30.063947  279095 kubeadm.go:883] updating cluster {Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<
nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:58:30.064099  279095 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:58:30.064174  279095 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:58:30.110756  279095 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1216 11:58:30.110842  279095 ssh_runner.go:195] Run: which lz4
	I1216 11:58:30.115974  279095 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 11:58:30.120455  279095 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 11:58:30.120505  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1216 11:58:30.052180  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:30.065848  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:30.065910  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:30.108387  276553 cri.go:89] found id: ""
	I1216 11:58:30.108418  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.108428  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:30.108436  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:30.108510  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:30.143956  276553 cri.go:89] found id: ""
	I1216 11:58:30.143997  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.144008  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:30.144014  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:30.144079  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:30.177213  276553 cri.go:89] found id: ""
	I1216 11:58:30.177250  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.177263  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:30.177272  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:30.177344  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:30.210808  276553 cri.go:89] found id: ""
	I1216 11:58:30.210846  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.210858  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:30.210867  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:30.210943  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:30.243895  276553 cri.go:89] found id: ""
	I1216 11:58:30.243935  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.243947  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:30.243955  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:30.244026  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:30.282295  276553 cri.go:89] found id: ""
	I1216 11:58:30.282335  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.282347  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:30.282355  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:30.282424  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:30.325096  276553 cri.go:89] found id: ""
	I1216 11:58:30.325127  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.325137  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:30.325146  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:30.325223  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:30.368651  276553 cri.go:89] found id: ""
	I1216 11:58:30.368688  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.368702  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:30.368715  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:30.368732  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:30.429442  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:30.429481  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:30.447157  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:30.447197  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:30.525823  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:30.525851  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:30.525876  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:30.619321  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:30.619374  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:31.365838  279095 crio.go:462] duration metric: took 1.249888265s to copy over tarball
	I1216 11:58:31.365939  279095 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 11:58:33.464744  279095 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.098774355s)
	I1216 11:58:33.464768  279095 crio.go:469] duration metric: took 2.098894697s to extract the tarball
	I1216 11:58:33.464775  279095 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 11:58:33.502605  279095 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:58:33.552519  279095 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 11:58:33.552546  279095 cache_images.go:84] Images are preloaded, skipping loading
	I1216 11:58:33.552564  279095 kubeadm.go:934] updating node { 192.168.39.202 8443 v1.31.2 crio true true} ...
	I1216 11:58:33.552695  279095 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-409154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 11:58:33.552789  279095 ssh_runner.go:195] Run: crio config
	I1216 11:58:33.599280  279095 cni.go:84] Creating CNI manager for ""
	I1216 11:58:33.599316  279095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:58:33.599330  279095 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1216 11:58:33.599369  279095 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-409154 NodeName:newest-cni-409154 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 11:58:33.599559  279095 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-409154"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.202"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 11:58:33.599635  279095 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 11:58:33.611454  279095 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:58:33.611560  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:58:33.620442  279095 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1216 11:58:33.636061  279095 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:58:33.651452  279095 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1216 11:58:33.667434  279095 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I1216 11:58:33.672022  279095 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:58:33.688407  279095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:58:33.825530  279095 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:58:33.842084  279095 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154 for IP: 192.168.39.202
	I1216 11:58:33.842119  279095 certs.go:194] generating shared ca certs ...
	I1216 11:58:33.842143  279095 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:33.842348  279095 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 11:58:33.842417  279095 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 11:58:33.842433  279095 certs.go:256] generating profile certs ...
	I1216 11:58:33.842546  279095 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/client.key
	I1216 11:58:33.842651  279095 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/apiserver.key.4b1f7a67
	I1216 11:58:33.842714  279095 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/proxy-client.key
	I1216 11:58:33.842887  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem (1338 bytes)
	W1216 11:58:33.842940  279095 certs.go:480] ignoring /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519_empty.pem, impossibly tiny 0 bytes
	I1216 11:58:33.842954  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 11:58:33.842995  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:58:33.843034  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:58:33.843080  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 11:58:33.843153  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:58:33.843887  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:58:33.888237  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 11:58:33.922983  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:58:33.947106  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 11:58:33.979827  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 11:58:34.006341  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 11:58:34.029912  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:58:34.052408  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 11:58:34.074341  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:58:34.096314  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem --> /usr/share/ca-certificates/217519.pem (1338 bytes)
	I1216 11:58:34.117813  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /usr/share/ca-certificates/2175192.pem (1708 bytes)
	I1216 11:58:34.139265  279095 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:58:34.154749  279095 ssh_runner.go:195] Run: openssl version
	I1216 11:58:34.160150  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:58:34.170031  279095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:58:34.174128  279095 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:58:34.174192  279095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:58:34.179382  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 11:58:34.189755  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/217519.pem && ln -fs /usr/share/ca-certificates/217519.pem /etc/ssl/certs/217519.pem"
	I1216 11:58:34.200079  279095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/217519.pem
	I1216 11:58:34.204422  279095 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 10:42 /usr/share/ca-certificates/217519.pem
	I1216 11:58:34.204483  279095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/217519.pem
	I1216 11:58:34.210007  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/217519.pem /etc/ssl/certs/51391683.0"
	I1216 11:58:34.219577  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2175192.pem && ln -fs /usr/share/ca-certificates/2175192.pem /etc/ssl/certs/2175192.pem"
	I1216 11:58:34.229612  279095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2175192.pem
	I1216 11:58:34.233804  279095 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 10:42 /usr/share/ca-certificates/2175192.pem
	I1216 11:58:34.233855  279095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2175192.pem
	I1216 11:58:34.239357  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2175192.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 11:58:34.249593  279095 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:58:34.253857  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 11:58:34.259667  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 11:58:34.265350  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 11:58:34.271063  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 11:58:34.276571  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 11:58:34.282052  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 11:58:34.287542  279095 kubeadm.go:392] StartCluster: {Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil
> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:58:34.287635  279095 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:58:34.287698  279095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:58:34.330701  279095 cri.go:89] found id: ""
	I1216 11:58:34.330766  279095 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 11:58:34.340500  279095 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 11:58:34.340523  279095 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 11:58:34.340563  279095 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 11:58:34.351292  279095 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 11:58:34.351877  279095 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-409154" does not appear in /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:58:34.352074  279095 kubeconfig.go:62] /home/jenkins/minikube-integration/20107-210204/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-409154" cluster setting kubeconfig missing "newest-cni-409154" context setting]
	I1216 11:58:34.352501  279095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:34.353808  279095 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 11:58:34.363101  279095 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.202
	I1216 11:58:34.363144  279095 kubeadm.go:1160] stopping kube-system containers ...
	I1216 11:58:34.363157  279095 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 11:58:34.363210  279095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:58:34.397341  279095 cri.go:89] found id: ""
	I1216 11:58:34.397410  279095 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 11:58:34.412614  279095 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:58:34.421801  279095 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:58:34.421830  279095 kubeadm.go:157] found existing configuration files:
	
	I1216 11:58:34.421890  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:58:34.430246  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:58:34.430309  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:58:34.438808  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:58:34.447241  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:58:34.447315  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:58:34.456064  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:58:34.464112  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:58:34.464179  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:58:34.472719  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:58:34.481088  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:58:34.481162  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 11:58:34.489902  279095 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:58:34.499478  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:34.600562  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
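The restart path above follows a check-then-regenerate pattern: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, removed with a best-effort "rm -f" when the check fails, and then "kubeadm init phase certs all" and "kubeadm init phase kubeconfig all" rebuild everything from the rendered kubeadm.yaml. A minimal Go sketch of that pattern (an illustration under those assumptions, not minikube's kubeadm.go) is:

// Minimal sketch of the cleanup shown above: keep a config file only if it
// already points at the expected control-plane endpoint, otherwise remove it
// so "kubeadm init phase" can regenerate it. Paths and phases mirror the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func cleanupStaleConfigs(endpoint string, files []string) error {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: best-effort removal, like "sudo rm -f".
			os.Remove(f)
		}
	}
	// Regenerate certificates and kubeconfigs from the rendered kubeadm.yaml.
	for _, phase := range []string{"certs all", "kubeconfig all"} {
		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm init phase %s: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := cleanupStaleConfigs("https://control-plane.minikube.internal:8443", files); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}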
	I1216 11:58:33.167369  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:33.180007  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:33.180135  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:33.216102  276553 cri.go:89] found id: ""
	I1216 11:58:33.216139  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.216149  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:33.216156  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:33.216219  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:33.264290  276553 cri.go:89] found id: ""
	I1216 11:58:33.264331  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.264351  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:33.264360  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:33.264428  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:33.307400  276553 cri.go:89] found id: ""
	I1216 11:58:33.307440  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.307452  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:33.307461  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:33.307528  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:33.348555  276553 cri.go:89] found id: ""
	I1216 11:58:33.348597  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.348610  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:33.348619  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:33.348688  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:33.385255  276553 cri.go:89] found id: ""
	I1216 11:58:33.385286  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.385296  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:33.385303  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:33.385366  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:33.422656  276553 cri.go:89] found id: ""
	I1216 11:58:33.422701  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.422713  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:33.422722  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:33.422783  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:33.461547  276553 cri.go:89] found id: ""
	I1216 11:58:33.461582  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.461591  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:33.461601  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:33.461651  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:33.496893  276553 cri.go:89] found id: ""
	I1216 11:58:33.496935  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.496948  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:33.496987  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:33.497003  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:33.510577  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:33.510609  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:33.579037  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:33.579064  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:33.579080  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:33.657142  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:33.657178  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:33.703963  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:33.703993  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:36.255123  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:36.269198  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:36.269265  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:36.302149  276553 cri.go:89] found id: ""
	I1216 11:58:36.302189  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.302202  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:36.302210  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:36.302278  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:36.334332  276553 cri.go:89] found id: ""
	I1216 11:58:36.334367  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.334378  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:36.334386  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:36.334478  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:36.367219  276553 cri.go:89] found id: ""
	I1216 11:58:36.367251  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.367262  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:36.367271  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:36.367346  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:36.409111  276553 cri.go:89] found id: ""
	I1216 11:58:36.409142  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.409154  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:36.409162  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:36.409235  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:36.453572  276553 cri.go:89] found id: ""
	I1216 11:58:36.453612  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.453624  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:36.453639  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:36.453713  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:36.498382  276553 cri.go:89] found id: ""
	I1216 11:58:36.498420  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.498430  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:36.498445  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:36.498516  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:36.533177  276553 cri.go:89] found id: ""
	I1216 11:58:36.533213  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.533225  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:36.533234  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:36.533315  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:36.568180  276553 cri.go:89] found id: ""
	I1216 11:58:36.568219  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.568232  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:36.568247  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:36.568263  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:36.631684  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:36.631732  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:36.646177  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:36.646219  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:36.715265  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:36.715298  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:36.715360  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:36.795141  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:36.795187  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:35.572311  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:35.786524  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:35.872020  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:35.964712  279095 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:58:35.964813  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:36.465153  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:36.965020  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:37.465530  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:37.965157  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:38.465454  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:38.479784  279095 api_server.go:72] duration metric: took 2.515071544s to wait for apiserver process to appear ...
	I1216 11:58:38.479821  279095 api_server.go:88] waiting for apiserver healthz status ...
	I1216 11:58:38.479849  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.266917  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 11:58:40.266944  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 11:58:40.266957  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.277079  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 11:58:40.277107  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 11:58:40.480677  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.486236  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:58:40.486263  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:58:40.979982  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.987028  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:58:40.987054  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:58:41.480764  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:41.487009  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:58:41.487037  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:58:41.980637  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:41.985077  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1216 11:58:41.991955  279095 api_server.go:141] control plane version: v1.31.2
	I1216 11:58:41.991987  279095 api_server.go:131] duration metric: took 3.512159263s to wait for apiserver health ...
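The healthz probes above are a plain retry loop: the early 403s come from the anonymous probe hitting RBAC before the bootstrap roles exist, the 500s list post-start hooks that have not finished ("rbac/bootstrap-roles", "scheduling/bootstrap-system-priority-classes"), and the loop only stops on a 200 "ok". A minimal Go sketch of such a wait loop (illustrative, not minikube's api_server.go) is:

// Poll the apiserver /healthz endpoint until it returns 200, treating 403 and
// 500 as "up but not fully initialized yet", as in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The unauthenticated probe cannot verify the apiserver's serving cert
		// during bring-up, hence InsecureSkipVerify for this check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			// 403/500: apiserver is reachable but post-start hooks still running.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.202:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}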
	I1216 11:58:41.991997  279095 cni.go:84] Creating CNI manager for ""
	I1216 11:58:41.992003  279095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:58:41.993731  279095 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 11:58:41.994974  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 11:58:42.005415  279095 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
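The two lines above create /etc/cni/net.d and copy a 496-byte conflist named 1-k8s.conflist; the file's contents are not printed in the log. For orientation only, a bridge + portmap conflist of the usual shape, written by a short Go helper, might look like the following (every value here is an assumption, not the file minikube actually installed):

// Write a generic bridge CNI conflist, roughly corresponding to the
// "Configuring bridge CNI" step above. Subnet and plugin options are assumed.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Equivalent in spirit to "sudo mkdir -p /etc/cni/net.d" plus the scp above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}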
	I1216 11:58:42.022839  279095 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 11:58:42.033438  279095 system_pods.go:59] 8 kube-system pods found
	I1216 11:58:42.033476  279095 system_pods.go:61] "coredns-7c65d6cfc9-cvvpl" [2962f6fc-40be-45f2-9096-aadecacdc0b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 11:58:42.033486  279095 system_pods.go:61] "etcd-newest-cni-409154" [de9adbc9-3837-4367-b60c-0a28b5675107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 11:58:42.033499  279095 system_pods.go:61] "kube-apiserver-newest-cni-409154" [62b9084c-0af2-4592-824c-8769ed71635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 11:58:42.033508  279095 system_pods.go:61] "kube-controller-manager-newest-cni-409154" [bde6fb21-32c3-4da8-909b-20b8b64f6e36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 11:58:42.033521  279095 system_pods.go:61] "kube-proxy-nz6pz" [703e409a-1ced-4b28-babc-6b8d233b9fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 11:58:42.033534  279095 system_pods.go:61] "kube-scheduler-newest-cni-409154" [850ced2f-6dae-40ef-ab2a-c5c66c5cd177] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 11:58:42.033551  279095 system_pods.go:61] "metrics-server-6867b74b74-9tg4t" [ba1ea4d4-8b10-41c3-944a-831cee9abc82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 11:58:42.033563  279095 system_pods.go:61] "storage-provisioner" [b3f6ed1a-bb74-4a9f-81f4-8ee598480fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 11:58:42.033575  279095 system_pods.go:74] duration metric: took 10.70808ms to wait for pod list to return data ...
	I1216 11:58:42.033585  279095 node_conditions.go:102] verifying NodePressure condition ...
	I1216 11:58:42.036820  279095 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 11:58:42.036844  279095 node_conditions.go:123] node cpu capacity is 2
	I1216 11:58:42.036875  279095 node_conditions.go:105] duration metric: took 3.281402ms to run NodePressure ...
	I1216 11:58:42.036900  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:42.327663  279095 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 11:58:42.339587  279095 ops.go:34] apiserver oom_adj: -16
	I1216 11:58:42.339616  279095 kubeadm.go:597] duration metric: took 7.999086573s to restartPrimaryControlPlane
	I1216 11:58:42.339627  279095 kubeadm.go:394] duration metric: took 8.052090671s to StartCluster
	I1216 11:58:42.339674  279095 settings.go:142] acquiring lock: {Name:mk3e7a5ea729d05e36deedcd45fd7829ccafa72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:42.339767  279095 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:58:42.340896  279095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:42.341317  279095 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:58:42.341358  279095 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 11:58:42.341468  279095 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-409154"
	I1216 11:58:42.341493  279095 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-409154"
	W1216 11:58:42.341502  279095 addons.go:243] addon storage-provisioner should already be in state true
	I1216 11:58:42.341525  279095 addons.go:69] Setting default-storageclass=true in profile "newest-cni-409154"
	I1216 11:58:42.341546  279095 addons.go:69] Setting dashboard=true in profile "newest-cni-409154"
	I1216 11:58:42.341534  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.341554  279095 config.go:182] Loaded profile config "newest-cni-409154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:58:42.341562  279095 addons.go:69] Setting metrics-server=true in profile "newest-cni-409154"
	I1216 11:58:42.341602  279095 addons.go:234] Setting addon metrics-server=true in "newest-cni-409154"
	W1216 11:58:42.341613  279095 addons.go:243] addon metrics-server should already be in state true
	I1216 11:58:42.341550  279095 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-409154"
	I1216 11:58:42.341668  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.341562  279095 addons.go:234] Setting addon dashboard=true in "newest-cni-409154"
	W1216 11:58:42.341766  279095 addons.go:243] addon dashboard should already be in state true
	I1216 11:58:42.341812  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.342033  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342055  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342066  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342065  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342085  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342107  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342207  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342230  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342910  279095 out.go:177] * Verifying Kubernetes components...
	I1216 11:58:42.344377  279095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:58:42.358561  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34565
	I1216 11:58:42.359188  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.359817  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.359841  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.360254  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.360504  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.362469  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
	I1216 11:58:42.362503  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I1216 11:58:42.362558  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I1216 11:58:42.362857  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.363000  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.363324  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.363351  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.363627  279095 addons.go:234] Setting addon default-storageclass=true in "newest-cni-409154"
	W1216 11:58:42.363647  279095 addons.go:243] addon default-storageclass should already be in state true
	I1216 11:58:42.363681  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.363730  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.363865  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.363890  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.363979  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.364019  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.364264  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.364300  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.364330  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.364468  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.364811  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.364857  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.365039  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.365061  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.365659  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.366150  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.366193  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.379564  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I1216 11:58:42.383427  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I1216 11:58:42.384214  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43403
	I1216 11:58:42.389453  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.389476  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.389687  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.389977  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.389995  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.390001  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.390016  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.390284  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.390308  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.390402  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.390497  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.390711  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.390766  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.390961  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.390969  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.391003  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.392531  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.393754  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.394494  279095 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1216 11:58:42.395267  279095 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 11:58:42.396422  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 11:58:42.396441  279095 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 11:58:42.396457  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.397661  279095 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 11:58:42.398785  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 11:58:42.398802  279095 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 11:58:42.398822  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.399817  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.400305  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.400328  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.400531  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.400690  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.400848  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.401130  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.402248  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.402677  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.402705  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.402899  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.403091  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.403235  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.403367  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.409172  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I1216 11:58:42.410026  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.410606  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.410625  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.410698  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I1216 11:58:42.411056  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.411179  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.411268  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.411636  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.411653  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.412245  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.412420  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.413415  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.413723  279095 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 11:58:42.413739  279095 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 11:58:42.413757  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.414236  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.415933  279095 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:58:39.333144  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:39.345528  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:39.345605  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:39.380984  276553 cri.go:89] found id: ""
	I1216 11:58:39.381022  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.381042  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:39.381050  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:39.381116  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:39.414143  276553 cri.go:89] found id: ""
	I1216 11:58:39.414179  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.414192  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:39.414200  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:39.414271  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:39.451080  276553 cri.go:89] found id: ""
	I1216 11:58:39.451113  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.451124  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:39.451133  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:39.451194  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:39.486555  276553 cri.go:89] found id: ""
	I1216 11:58:39.486585  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.486593  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:39.486599  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:39.486653  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:39.519626  276553 cri.go:89] found id: ""
	I1216 11:58:39.519663  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.519676  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:39.519683  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:39.519747  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:39.551678  276553 cri.go:89] found id: ""
	I1216 11:58:39.551717  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.551729  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:39.551736  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:39.551793  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:39.585498  276553 cri.go:89] found id: ""
	I1216 11:58:39.585536  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.585548  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:39.585556  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:39.585634  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:39.619904  276553 cri.go:89] found id: ""
	I1216 11:58:39.619941  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.619952  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:39.619967  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:39.619989  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:39.698641  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:39.698673  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:39.698690  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:39.790153  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:39.790199  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:39.836401  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:39.836438  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:39.887171  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:39.887217  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:42.400773  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:42.424070  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:42.424127  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:42.467053  276553 cri.go:89] found id: ""
	I1216 11:58:42.467092  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.467103  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:42.467110  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:42.467171  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:42.510214  276553 cri.go:89] found id: ""
	I1216 11:58:42.510248  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.510260  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:42.510268  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:42.510328  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:42.553938  276553 cri.go:89] found id: ""
	I1216 11:58:42.553974  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.553986  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:42.553994  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:42.554058  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:42.595174  276553 cri.go:89] found id: ""
	I1216 11:58:42.595208  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.595220  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:42.595228  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:42.595293  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:42.631184  276553 cri.go:89] found id: ""
	I1216 11:58:42.631219  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.631231  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:42.631240  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:42.631300  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:42.665302  276553 cri.go:89] found id: ""
	I1216 11:58:42.665328  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.665338  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:42.665346  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:42.665396  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:42.702222  276553 cri.go:89] found id: ""
	I1216 11:58:42.702249  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.702257  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:42.702263  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:42.702311  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:42.735627  276553 cri.go:89] found id: ""
	I1216 11:58:42.735658  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.735667  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:42.735676  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:42.735688  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:42.786111  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:42.786144  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:42.803378  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:42.803413  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:42.882160  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:42.882190  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:42.882207  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:42.969671  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:42.969707  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
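The diagnostics pass above repeats each time no control-plane containers are found: it captures the kubelet and CRI-O journals, recent dmesg warnings, a "kubectl describe nodes" attempt (which fails while the apiserver on localhost:8443 is down), and "crictl ps -a". A minimal Go sketch of that best-effort collection (commands taken from the log; the structure is illustrative, not minikube's logs.go) is:

// Run each diagnostic command and keep whatever output it produced, even when
// the command itself fails, as the post-mortem gathering above does.
package main

import (
	"fmt"
	"os/exec"
)

func gather() map[string]string {
	cmds := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"CRI-O":            "journalctl -u crio -n 400",
		"dmesg":            "dmesg --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"container status": "crictl ps -a",
	}
	out := make(map[string]string, len(cmds))
	for name, cmd := range cmds {
		// A failing command (e.g. refused connection to localhost:8443) still
		// contributes its stderr to the report.
		b, _ := exec.Command("bash", "-c", cmd).CombinedOutput()
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, logs := range gather() {
		fmt.Printf("==> %s <==\n%s\n", name, logs)
	}
}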
	I1216 11:58:42.416975  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.417169  279095 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:58:42.417184  279095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 11:58:42.417201  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.417684  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.417713  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.417898  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.418090  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.418227  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.418322  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.420259  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.420651  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.420679  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.420818  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.420977  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.421115  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.421227  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.598988  279095 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:58:42.619954  279095 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:58:42.620059  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:42.638426  279095 api_server.go:72] duration metric: took 297.04949ms to wait for apiserver process to appear ...
	I1216 11:58:42.638459  279095 api_server.go:88] waiting for apiserver healthz status ...
	I1216 11:58:42.638487  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:42.645697  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1216 11:58:42.647451  279095 api_server.go:141] control plane version: v1.31.2
	I1216 11:58:42.647484  279095 api_server.go:131] duration metric: took 9.015381ms to wait for apiserver health ...
	I1216 11:58:42.647495  279095 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 11:58:42.653389  279095 system_pods.go:59] 8 kube-system pods found
	I1216 11:58:42.653419  279095 system_pods.go:61] "coredns-7c65d6cfc9-cvvpl" [2962f6fc-40be-45f2-9096-aadecacdc0b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 11:58:42.653427  279095 system_pods.go:61] "etcd-newest-cni-409154" [de9adbc9-3837-4367-b60c-0a28b5675107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 11:58:42.653437  279095 system_pods.go:61] "kube-apiserver-newest-cni-409154" [62b9084c-0af2-4592-824c-8769ed71635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 11:58:42.653443  279095 system_pods.go:61] "kube-controller-manager-newest-cni-409154" [bde6fb21-32c3-4da8-909b-20b8b64f6e36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 11:58:42.653447  279095 system_pods.go:61] "kube-proxy-nz6pz" [703e409a-1ced-4b28-babc-6b8d233b9fcd] Running
	I1216 11:58:42.653452  279095 system_pods.go:61] "kube-scheduler-newest-cni-409154" [850ced2f-6dae-40ef-ab2a-c5c66c5cd177] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 11:58:42.653458  279095 system_pods.go:61] "metrics-server-6867b74b74-9tg4t" [ba1ea4d4-8b10-41c3-944a-831cee9abc82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 11:58:42.653464  279095 system_pods.go:61] "storage-provisioner" [b3f6ed1a-bb74-4a9f-81f4-8ee598480fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 11:58:42.653473  279095 system_pods.go:74] duration metric: took 5.971424ms to wait for pod list to return data ...
	I1216 11:58:42.653482  279095 default_sa.go:34] waiting for default service account to be created ...
	I1216 11:58:42.656290  279095 default_sa.go:45] found service account: "default"
	I1216 11:58:42.656311  279095 default_sa.go:55] duration metric: took 2.821034ms for default service account to be created ...
	I1216 11:58:42.656325  279095 kubeadm.go:582] duration metric: took 314.954393ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 11:58:42.656346  279095 node_conditions.go:102] verifying NodePressure condition ...
	I1216 11:58:42.659184  279095 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 11:58:42.659211  279095 node_conditions.go:123] node cpu capacity is 2
	I1216 11:58:42.659224  279095 node_conditions.go:105] duration metric: took 2.872931ms to run NodePressure ...
	I1216 11:58:42.659239  279095 start.go:241] waiting for startup goroutines ...
	I1216 11:58:42.718023  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 11:58:42.718054  279095 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 11:58:42.720098  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 11:58:42.720117  279095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 11:58:42.761050  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:58:42.762948  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 11:58:42.772260  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 11:58:42.772281  279095 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 11:58:42.776710  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 11:58:42.776742  279095 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 11:58:42.815042  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 11:58:42.815075  279095 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 11:58:42.847205  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 11:58:42.847233  279095 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 11:58:42.858645  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 11:58:42.858702  279095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 11:58:42.880891  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 11:58:42.880928  279095 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 11:58:42.901442  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 11:58:42.952713  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 11:58:42.952751  279095 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 11:58:43.107941  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 11:58:43.107984  279095 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 11:58:43.130360  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 11:58:43.130386  279095 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 11:58:43.190120  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 11:58:43.190147  279095 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 11:58:43.217576  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 11:58:44.705014  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.942029783s)
	I1216 11:58:44.705086  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705103  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705109  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.803618622s)
	I1216 11:58:44.705121  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.944038036s)
	I1216 11:58:44.705147  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705162  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705162  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705211  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705465  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.705510  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705518  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705534  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705548  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705644  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705658  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705696  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705708  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705716  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705728  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705739  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705717  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705733  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.705955  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705971  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.706021  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.706032  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.707564  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.707608  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.707647  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.707659  279095 addons.go:475] Verifying addon metrics-server=true in "newest-cni-409154"
	I1216 11:58:44.733968  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.733996  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.734329  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.734355  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.734356  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.986437  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.76879955s)
	I1216 11:58:44.986505  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.986524  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.986925  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.986948  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.986958  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.986966  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.987212  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.987234  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.988962  279095 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-409154 addons enable metrics-server
	
	I1216 11:58:44.990322  279095 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1216 11:58:44.991523  279095 addons.go:510] duration metric: took 2.650165363s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1216 11:58:44.991578  279095 start.go:246] waiting for cluster config update ...
	I1216 11:58:44.991599  279095 start.go:255] writing updated cluster config ...
	I1216 11:58:44.991876  279095 ssh_runner.go:195] Run: rm -f paused
	I1216 11:58:45.051986  279095 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1216 11:58:45.053871  279095 out.go:177] * Done! kubectl is now configured to use "newest-cni-409154" cluster and "default" namespace by default
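
The api_server.go lines earlier in this block show the readiness gate for the newest-cni-409154 cluster: once the apiserver process is found, https://192.168.39.202:8443/healthz is polled until it answers 200 "ok". A minimal sketch of that kind of poll, assuming a self-signed apiserver certificate (hence the skipped TLS verification); the function name and timeout are illustrative, not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok" or the deadline passes, mirroring the check logged above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.202:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Only after this check succeeds does the log move on to waiting for kube-system pods, the default service account, and node conditions, and finally to enabling the addons.
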
	I1216 11:58:45.512113  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:45.529025  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:45.529084  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:45.563665  276553 cri.go:89] found id: ""
	I1216 11:58:45.563697  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.563708  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:45.563717  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:45.563776  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:45.596079  276553 cri.go:89] found id: ""
	I1216 11:58:45.596119  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.596132  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:45.596140  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:45.596202  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:45.629014  276553 cri.go:89] found id: ""
	I1216 11:58:45.629042  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.629055  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:45.629062  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:45.629128  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:45.671688  276553 cri.go:89] found id: ""
	I1216 11:58:45.671714  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.671725  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:45.671733  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:45.671788  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:45.711944  276553 cri.go:89] found id: ""
	I1216 11:58:45.711977  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.711987  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:45.711994  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:45.712046  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:45.752121  276553 cri.go:89] found id: ""
	I1216 11:58:45.752155  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.752164  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:45.752170  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:45.752230  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:45.785470  276553 cri.go:89] found id: ""
	I1216 11:58:45.785499  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.785510  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:45.785518  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:45.785576  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:45.819346  276553 cri.go:89] found id: ""
	I1216 11:58:45.819374  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.819387  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:45.819399  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:45.819414  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:45.855153  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:45.855199  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:45.906709  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:45.906745  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:45.919757  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:45.919788  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:45.984752  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:45.984779  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:45.984798  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:48.559896  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:48.572393  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:48.572475  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:48.603458  276553 cri.go:89] found id: ""
	I1216 11:58:48.603496  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.603508  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:48.603516  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:48.603582  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:48.639883  276553 cri.go:89] found id: ""
	I1216 11:58:48.639920  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.639931  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:48.639938  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:48.640065  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:48.671045  276553 cri.go:89] found id: ""
	I1216 11:58:48.671070  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.671079  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:48.671085  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:48.671152  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:48.703295  276553 cri.go:89] found id: ""
	I1216 11:58:48.703341  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.703351  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:48.703360  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:48.703428  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:48.736411  276553 cri.go:89] found id: ""
	I1216 11:58:48.736442  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.736451  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:48.736457  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:48.736514  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:48.767332  276553 cri.go:89] found id: ""
	I1216 11:58:48.767375  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.767387  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:48.767396  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:48.767461  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:48.800080  276553 cri.go:89] found id: ""
	I1216 11:58:48.800112  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.800123  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:48.800131  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:48.800197  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:48.832760  276553 cri.go:89] found id: ""
	I1216 11:58:48.832802  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.832814  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:48.832826  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:48.832845  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:48.848815  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:48.848855  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:48.930771  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:48.930794  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:48.930808  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:49.005468  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:49.005511  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:49.040128  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:49.040166  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:51.591281  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:51.603590  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:51.603672  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:51.634226  276553 cri.go:89] found id: ""
	I1216 11:58:51.634255  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.634263  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:51.634270  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:51.634324  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:51.665685  276553 cri.go:89] found id: ""
	I1216 11:58:51.665718  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.665726  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:51.665732  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:51.665783  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:51.697159  276553 cri.go:89] found id: ""
	I1216 11:58:51.697192  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.697200  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:51.697206  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:51.697255  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:51.729513  276553 cri.go:89] found id: ""
	I1216 11:58:51.729543  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.729551  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:51.729556  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:51.729611  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:51.760525  276553 cri.go:89] found id: ""
	I1216 11:58:51.760559  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.760568  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:51.760574  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:51.760634  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:51.791787  276553 cri.go:89] found id: ""
	I1216 11:58:51.791824  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.791835  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:51.791844  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:51.791897  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:51.823131  276553 cri.go:89] found id: ""
	I1216 11:58:51.823166  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.823177  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:51.823186  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:51.823258  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:51.854638  276553 cri.go:89] found id: ""
	I1216 11:58:51.854675  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.854688  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:51.854699  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:51.854720  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:51.903207  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:51.903247  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:51.916182  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:51.916210  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:51.978879  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:51.978906  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:51.978918  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:52.054050  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:52.054087  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:54.592784  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:54.606444  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:54.606511  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:54.641053  276553 cri.go:89] found id: ""
	I1216 11:58:54.641094  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.641106  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:54.641114  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:54.641194  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:54.672984  276553 cri.go:89] found id: ""
	I1216 11:58:54.673018  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.673027  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:54.673032  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:54.673081  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:54.705118  276553 cri.go:89] found id: ""
	I1216 11:58:54.705144  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.705153  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:54.705159  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:54.705210  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:54.735744  276553 cri.go:89] found id: ""
	I1216 11:58:54.735778  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.735791  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:54.735798  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:54.735851  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:54.767983  276553 cri.go:89] found id: ""
	I1216 11:58:54.768012  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.768020  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:54.768027  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:54.768076  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:54.799412  276553 cri.go:89] found id: ""
	I1216 11:58:54.799440  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.799448  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:54.799455  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:54.799506  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:54.830329  276553 cri.go:89] found id: ""
	I1216 11:58:54.830357  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.830365  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:54.830371  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:54.830421  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:54.861544  276553 cri.go:89] found id: ""
	I1216 11:58:54.861573  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.861583  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:54.861593  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:54.861606  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:54.911522  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:54.911562  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:54.923947  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:54.923980  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:55.000816  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:55.000838  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:55.000854  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:55.072803  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:55.072845  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:57.608748  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:57.622071  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:57.622149  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:57.653826  276553 cri.go:89] found id: ""
	I1216 11:58:57.653863  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.653876  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:57.653885  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:57.653946  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:57.686809  276553 cri.go:89] found id: ""
	I1216 11:58:57.686839  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.686852  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:57.686860  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:57.686931  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:57.719565  276553 cri.go:89] found id: ""
	I1216 11:58:57.719601  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.719613  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:57.719622  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:57.719676  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:57.752279  276553 cri.go:89] found id: ""
	I1216 11:58:57.752318  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.752330  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:57.752339  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:57.752403  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:57.785915  276553 cri.go:89] found id: ""
	I1216 11:58:57.785949  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.785961  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:57.785969  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:57.786039  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:57.818703  276553 cri.go:89] found id: ""
	I1216 11:58:57.818734  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.818748  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:57.818754  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:57.818821  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:57.856323  276553 cri.go:89] found id: ""
	I1216 11:58:57.856362  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.856371  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:57.856377  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:57.856431  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:57.888461  276553 cri.go:89] found id: ""
	I1216 11:58:57.888507  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.888515  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:57.888526  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:57.888543  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:57.924744  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:57.924783  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:57.974915  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:57.974952  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:57.987702  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:57.987737  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:58.047740  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:58.047764  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:58.047779  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:59:00.624270  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:59:00.636790  276553 kubeadm.go:597] duration metric: took 4m2.920412851s to restartPrimaryControlPlane
	W1216 11:59:00.636868  276553 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 11:59:00.636890  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 11:59:01.078876  276553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:59:01.092675  276553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:59:01.102060  276553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:59:01.111330  276553 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:59:01.111353  276553 kubeadm.go:157] found existing configuration files:
	
	I1216 11:59:01.111396  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:59:01.120045  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:59:01.120110  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:59:01.128974  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:59:01.137554  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:59:01.137630  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:59:01.146493  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:59:01.154841  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:59:01.154904  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:59:01.163934  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:59:01.172584  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:59:01.172637  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
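
The kubeadm.go:163 lines above document the stale-config cleanup that precedes the re-init: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint (or the file itself) is missing. A rough equivalent of that check-and-remove step, with the paths and endpoint taken from the log; the function name is illustrative and the sketch deliberately ignores the sudo/SSH indirection the real runner uses:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs mirrors the pattern in the log: any kubeconfig that
// does not mention the expected control-plane endpoint is treated as stale
// and deleted so kubeadm can regenerate it.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or missing endpoint: remove it, ignoring "not exist" errors.
			if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Printf("could not remove %s: %v\n", p, rmErr)
				continue
			}
			fmt.Printf("removed stale config %s\n", p)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}

With all four files already absent in this run, every grep exits with status 2 and the subsequent rm -f calls are effectively no-ops before kubeadm init regenerates the configs.
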
	I1216 11:59:01.181391  276553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 11:59:01.369411  276553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:00:57.257269  276553 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 12:00:57.257376  276553 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 12:00:57.258891  276553 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 12:00:57.258974  276553 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:00:57.259041  276553 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:00:57.259123  276553 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:00:57.259218  276553 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:00:57.259321  276553 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:00:57.262146  276553 out.go:235]   - Generating certificates and keys ...
	I1216 12:00:57.262267  276553 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:00:57.262347  276553 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:00:57.262465  276553 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:00:57.262571  276553 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:00:57.262667  276553 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:00:57.262717  276553 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:00:57.262791  276553 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:00:57.262860  276553 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:00:57.262924  276553 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:00:57.262996  276553 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:00:57.263030  276553 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:00:57.263084  276553 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:00:57.263135  276553 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:00:57.263181  276553 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:00:57.263235  276553 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:00:57.263281  276553 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:00:57.263373  276553 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:00:57.263445  276553 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:00:57.263481  276553 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:00:57.263542  276553 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:00:57.265255  276553 out.go:235]   - Booting up control plane ...
	I1216 12:00:57.265379  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:00:57.265453  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:00:57.265511  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:00:57.265629  276553 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:00:57.265768  276553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:00:57.265811  276553 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 12:00:57.265917  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266078  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266159  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266350  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266437  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266649  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266712  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266895  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266973  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.267138  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.267150  276553 kubeadm.go:310] 
	I1216 12:00:57.267214  276553 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 12:00:57.267271  276553 kubeadm.go:310] 		timed out waiting for the condition
	I1216 12:00:57.267281  276553 kubeadm.go:310] 
	I1216 12:00:57.267334  276553 kubeadm.go:310] 	This error is likely caused by:
	I1216 12:00:57.267378  276553 kubeadm.go:310] 		- The kubelet is not running
	I1216 12:00:57.267488  276553 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 12:00:57.267499  276553 kubeadm.go:310] 
	I1216 12:00:57.267604  276553 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 12:00:57.267659  276553 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 12:00:57.267700  276553 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 12:00:57.267716  276553 kubeadm.go:310] 
	I1216 12:00:57.267867  276553 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 12:00:57.267965  276553 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 12:00:57.267976  276553 kubeadm.go:310] 
	I1216 12:00:57.268074  276553 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 12:00:57.268144  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 12:00:57.268210  276553 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 12:00:57.268279  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 12:00:57.268328  276553 kubeadm.go:310] 
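
The [kubelet-check] messages relayed above come from kubeadm repeatedly probing the kubelet's local health endpoint and getting "connection refused" until the 4m0s wait expires. A small sketch of that probe, equivalent to the curl command quoted in the output; the function name is illustrative:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkKubeletHealthz performs the same probe the kubeadm output refers to:
// an HTTP GET against the kubelet's local healthz port. In this run the call
// kept failing with "connection refused", which is why init timed out.
func checkKubeletHealthz() error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		return err // e.g. dial tcp 127.0.0.1:10248: connect: connection refused
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkKubeletHealthz(); err != nil {
		fmt.Println("kubelet not healthy:", err)
	}
}

The persistent refusal on 127.0.0.1:10248 means the kubelet never came up, which matches the troubleshooting hints (systemctl status kubelet, journalctl -xeu kubelet) printed just above.
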
	W1216 12:00:57.268428  276553 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 12:00:57.268489  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 12:00:57.717860  276553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 12:00:57.733963  276553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:00:57.744259  276553 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 12:00:57.744288  276553 kubeadm.go:157] found existing configuration files:
	
	I1216 12:00:57.744336  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 12:00:57.753893  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 12:00:57.753977  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:00:57.764071  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 12:00:57.773595  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 12:00:57.773682  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:00:57.783828  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 12:00:57.793769  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 12:00:57.793839  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:00:57.803766  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 12:00:57.813437  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 12:00:57.813513  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 12:00:57.823881  276553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 12:00:57.888749  276553 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 12:00:57.888835  276553 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:00:58.038785  276553 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:00:58.038916  276553 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:00:58.039088  276553 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:00:58.223884  276553 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:00:58.225611  276553 out.go:235]   - Generating certificates and keys ...
	I1216 12:00:58.225731  276553 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:00:58.225852  276553 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:00:58.225980  276553 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:00:58.226074  276553 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:00:58.226178  276553 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:00:58.226255  276553 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:00:58.226344  276553 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:00:58.226424  276553 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:00:58.226551  276553 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:00:58.226688  276553 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:00:58.226756  276553 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:00:58.226821  276553 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:00:58.353567  276553 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:00:58.694503  276553 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:00:58.792660  276553 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:00:59.086043  276553 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:00:59.108391  276553 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:00:59.108558  276553 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:00:59.108623  276553 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:00:59.247927  276553 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:00:59.249627  276553 out.go:235]   - Booting up control plane ...
	I1216 12:00:59.249774  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:00:59.251436  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:00:59.254163  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:00:59.257479  276553 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:00:59.261730  276553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:01:39.263454  276553 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 12:01:39.263569  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:39.263847  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:01:44.264678  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:44.264927  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:01:54.265352  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:54.265639  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:14.265999  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:02:14.266235  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:54.265070  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:02:54.265312  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:54.265327  276553 kubeadm.go:310] 
	I1216 12:02:54.265385  276553 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 12:02:54.265445  276553 kubeadm.go:310] 		timed out waiting for the condition
	I1216 12:02:54.265455  276553 kubeadm.go:310] 
	I1216 12:02:54.265515  276553 kubeadm.go:310] 	This error is likely caused by:
	I1216 12:02:54.265563  276553 kubeadm.go:310] 		- The kubelet is not running
	I1216 12:02:54.265722  276553 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 12:02:54.265750  276553 kubeadm.go:310] 
	I1216 12:02:54.265890  276553 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 12:02:54.265936  276553 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 12:02:54.265973  276553 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 12:02:54.265995  276553 kubeadm.go:310] 
	I1216 12:02:54.266136  276553 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 12:02:54.266255  276553 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 12:02:54.266265  276553 kubeadm.go:310] 
	I1216 12:02:54.266405  276553 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 12:02:54.266530  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 12:02:54.266638  276553 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 12:02:54.266729  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 12:02:54.266748  276553 kubeadm.go:310] 
	I1216 12:02:54.267271  276553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:02:54.267355  276553 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 12:02:54.267426  276553 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 12:02:54.267491  276553 kubeadm.go:394] duration metric: took 7m56.598620484s to StartCluster
	I1216 12:02:54.267542  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 12:02:54.267613  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 12:02:54.301812  276553 cri.go:89] found id: ""
	I1216 12:02:54.301847  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.301855  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 12:02:54.301863  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 12:02:54.301917  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 12:02:54.334730  276553 cri.go:89] found id: ""
	I1216 12:02:54.334768  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.334780  276553 logs.go:284] No container was found matching "etcd"
	I1216 12:02:54.334788  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 12:02:54.334853  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 12:02:54.366080  276553 cri.go:89] found id: ""
	I1216 12:02:54.366115  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.366128  276553 logs.go:284] No container was found matching "coredns"
	I1216 12:02:54.366136  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 12:02:54.366202  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 12:02:54.396447  276553 cri.go:89] found id: ""
	I1216 12:02:54.396483  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.396495  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 12:02:54.396503  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 12:02:54.396584  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 12:02:54.429291  276553 cri.go:89] found id: ""
	I1216 12:02:54.429326  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.429337  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 12:02:54.429345  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 12:02:54.429409  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 12:02:54.460235  276553 cri.go:89] found id: ""
	I1216 12:02:54.460268  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.460276  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 12:02:54.460283  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 12:02:54.460334  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 12:02:54.492739  276553 cri.go:89] found id: ""
	I1216 12:02:54.492771  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.492780  276553 logs.go:284] No container was found matching "kindnet"
	I1216 12:02:54.492787  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 12:02:54.492840  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 12:02:54.524322  276553 cri.go:89] found id: ""
	I1216 12:02:54.524358  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.524369  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 12:02:54.524384  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 12:02:54.524400  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:02:54.575979  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 12:02:54.576022  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:02:54.591148  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:02:54.591184  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 12:02:54.704231  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 12:02:54.704259  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 12:02:54.704277  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 12:02:54.804001  276553 logs.go:123] Gathering logs for container status ...
	I1216 12:02:54.804047  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 12:02:54.842021  276553 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 12:02:54.842097  276553 out.go:270] * 
	W1216 12:02:54.842173  276553 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 12:02:54.842192  276553 out.go:270] * 
	W1216 12:02:54.843372  276553 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:02:54.847542  276553 out.go:201] 
	W1216 12:02:54.848991  276553 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 12:02:54.849037  276553 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 12:02:54.849054  276553 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 12:02:54.850514  276553 out.go:201] 
	
	
	==> CRI-O <==
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.869676483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734350575869655189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8627dd6-034a-4167-891e-419bf633cb03 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.870331370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51dbe305-9d67-4962-9dfb-cf72d026bba1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.870403657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51dbe305-9d67-4962-9dfb-cf72d026bba1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.870468292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=51dbe305-9d67-4962-9dfb-cf72d026bba1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.904733502Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e677d6a-1f7e-46a5-978e-204c95bea3fe name=/runtime.v1.RuntimeService/Version
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.904846417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e677d6a-1f7e-46a5-978e-204c95bea3fe name=/runtime.v1.RuntimeService/Version
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.906143516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba0ec9ce-3a27-4004-9069-19591a1d49c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.906657088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734350575906635589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba0ec9ce-3a27-4004-9069-19591a1d49c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.907192265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8b3dfc1-f32f-4303-9077-96d4222fd318 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.907269460Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8b3dfc1-f32f-4303-9077-96d4222fd318 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.907322031Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c8b3dfc1-f32f-4303-9077-96d4222fd318 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.939863290Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=714aa5a1-fc8d-4458-9cdc-561fcf21c4f7 name=/runtime.v1.RuntimeService/Version
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.940021681Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=714aa5a1-fc8d-4458-9cdc-561fcf21c4f7 name=/runtime.v1.RuntimeService/Version
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.941175042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4638675b-9c1a-485d-ba16-53b1f2bdb5c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.941635449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734350575941614857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4638675b-9c1a-485d-ba16-53b1f2bdb5c9 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.942311155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a30b2bc8-9109-4bab-a24f-872dd51023d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.942383833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a30b2bc8-9109-4bab-a24f-872dd51023d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.942434419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a30b2bc8-9109-4bab-a24f-872dd51023d4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.976032516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92ec7bdb-4460-4417-bf92-0ef3eb32bd4f name=/runtime.v1.RuntimeService/Version
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.976577018Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92ec7bdb-4460-4417-bf92-0ef3eb32bd4f name=/runtime.v1.RuntimeService/Version
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.983910826Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf284de5-2e9c-4256-a6c4-11625f0bd89f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.984559251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734350575984529532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf284de5-2e9c-4256-a6c4-11625f0bd89f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.985233792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6ef6338-2c32-4073-9c1e-54f4028c38dc name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.985323341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6ef6338-2c32-4073-9c1e-54f4028c38dc name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:02:55 old-k8s-version-933974 crio[633]: time="2024-12-16 12:02:55.985371033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e6ef6338-2c32-4073-9c1e-54f4028c38dc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 11:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051890] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038428] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.993369] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.061080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.582390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.514537] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.064570] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063777] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.185160] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.127891] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.247553] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.349244] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.059200] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.994875] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[Dec16 11:55] kauditd_printk_skb: 46 callbacks suppressed
	[Dec16 11:59] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[Dec16 12:00] systemd-fstab-generator[5310]: Ignoring "noauto" option for root device
	[  +0.068159] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:02:56 up 8 min,  0 users,  load average: 0.00, 0.07, 0.04
	Linux old-k8s-version-933974 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:108 +0x66
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.DefaultWatchErrorHandler(0xc0000d0380, 0x4f04d00, 0xc000b7a670)
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000b5e6f0)
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000975ef0, 0x4f0ac20, 0xc000101a90, 0x1, 0xc0001020c0)
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d0380, 0xc0001020c0)
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0005b3500, 0xc000b70300)
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 16 12:02:53 old-k8s-version-933974 kubelet[5491]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 16 12:02:54 old-k8s-version-933974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 16 12:02:54 old-k8s-version-933974 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 16 12:02:54 old-k8s-version-933974 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 16 12:02:54 old-k8s-version-933974 kubelet[5541]: I1216 12:02:54.676609    5541 server.go:416] Version: v1.20.0
	Dec 16 12:02:54 old-k8s-version-933974 kubelet[5541]: I1216 12:02:54.676909    5541 server.go:837] Client rotation is on, will bootstrap in background
	Dec 16 12:02:54 old-k8s-version-933974 kubelet[5541]: I1216 12:02:54.678816    5541 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 16 12:02:54 old-k8s-version-933974 kubelet[5541]: I1216 12:02:54.679845    5541 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Dec 16 12:02:54 old-k8s-version-933974 kubelet[5541]: W1216 12:02:54.680002    5541 manager.go:159] Cannot detect current cgroup on cgroup v2
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-933974 -n old-k8s-version-933974
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 2 (233.701314ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-933974" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (508.64s)
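The failure above comes down to the kubelet on the old-k8s-version (v1.20.0) node never answering its health check, so kubeadm times out in the wait-control-plane phase and minikube exits with K8S_KUBELET_NOT_RUNNING. As a rough sketch only (these are the commands the log itself suggests, run through the profile used in this run, old-k8s-version-933974), the node can be inspected with:

	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

and, per the suggestion printed in the log, a retry with the systemd cgroup driver would look like:

	out/minikube-linux-amd64 start -p old-k8s-version-933974 --extra-config=kubelet.cgroup-driver=systemd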

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:03:21.265464  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/default-k8s-diff-port-935544/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:03:29.963418  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:03:48.521931  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:03:50.823374  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:04:52.224191  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:04:56.030300  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/no-preload-181484/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:05:23.731194  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/no-preload-181484/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:05:26.956024  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:05:37.405098  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/default-k8s-diff-port-935544/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[previous warning repeated 27 more times]
E1216 12:06:05.107502  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/default-k8s-diff-port-935544/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:06:06.844173  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[previous warning repeated 7 more times]
E1216 12:06:15.288527  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[previous warning repeated 20 more times]
E1216 12:06:35.653905  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[previous warning repeated 13 more times]
E1216 12:06:50.020613  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:06:51.595615  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[previous warning repeated 3 more times]
E1216 12:06:55.823922  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[previous warning repeated 15 more times]
E1216 12:07:12.186954  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[previous warning repeated 46 more times]
E1216 12:07:58.717413  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[previous warning repeated 19 more times]
E1216 12:08:18.886103  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:08:29.963397  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:08:35.250463  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:08:48.522101  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:08:50.822833  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:09:52.223661  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:09:53.024613  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:09:56.030719  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/no-preload-181484/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:10:13.888032  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:10:26.955401  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:10:37.405045  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/default-k8s-diff-port-935544/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[... the same WARNING repeated 29 more times ...]
E1216 12:11:06.843825  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[... the same WARNING repeated 28 more times ...]
E1216 12:11:35.653936  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[... the same WARNING repeated 19 more times ...]
E1216 12:11:55.823940  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
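The GET requests being retried above are the dashboard pod list for the label selector k8s-app=kubernetes-dashboard. An equivalent manual check (an illustrative sketch; it assumes the kubectl context carries the profile name old-k8s-version-933974) would be:

	kubectl --context old-k8s-version-933974 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver at 192.168.61.2:8443 refusing connections, this query fails the same way until the control plane comes back up.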
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-933974 -n old-k8s-version-933974
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 2 (232.056532ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-933974" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
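Because status reports the apiserver as Stopped, a useful follow-up (a sketch based on the commands that appear in the post-mortem log below) is to confirm whether any kube-apiserver container exists inside the node:

	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo crictl ps -a --name=kube-apiserver"

The post-mortem log shows the same crictl query returning no containers, which is consistent with the connection-refused errors seen while waiting for the dashboard pod.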
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 2 (220.141408ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-933974 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-987169 image list                          | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| delete  | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| start   | -p newest-cni-409154 --memory=2200 --alsologtostderr   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-935544                           | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	| image   | no-preload-181484 image list                           | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| delete  | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| addons  | enable metrics-server -p newest-cni-409154             | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-409154                  | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-409154 --memory=2200 --alsologtostderr   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-409154 image list                           | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	| delete  | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:58:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:58:10.457214  279095 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:58:10.457320  279095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:58:10.457328  279095 out.go:358] Setting ErrFile to fd 2...
	I1216 11:58:10.457332  279095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:58:10.457523  279095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:58:10.458091  279095 out.go:352] Setting JSON to false
	I1216 11:58:10.459068  279095 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13237,"bootTime":1734337053,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 11:58:10.459136  279095 start.go:139] virtualization: kvm guest
	I1216 11:58:10.461398  279095 out.go:177] * [newest-cni-409154] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 11:58:10.462722  279095 notify.go:220] Checking for updates...
	I1216 11:58:10.462776  279095 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:58:10.464205  279095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:58:10.465623  279095 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:58:10.466987  279095 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:58:10.468240  279095 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 11:58:10.469465  279095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:58:10.470955  279095 config.go:182] Loaded profile config "newest-cni-409154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:58:10.471351  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:10.471415  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:10.486592  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
	I1216 11:58:10.487085  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:10.487663  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:10.487693  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:10.488179  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:10.488439  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:10.488761  279095 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:58:10.489224  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:10.489296  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:10.505146  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I1216 11:58:10.505678  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:10.506233  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:10.506264  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:10.506714  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:10.506902  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:10.544395  279095 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 11:58:10.545779  279095 start.go:297] selected driver: kvm2
	I1216 11:58:10.545792  279095 start.go:901] validating driver "kvm2" against &{Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s
ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:58:10.545905  279095 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:58:10.546668  279095 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:58:10.546758  279095 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 11:58:10.563076  279095 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 11:58:10.563675  279095 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 11:58:10.563714  279095 cni.go:84] Creating CNI manager for ""
	I1216 11:58:10.563781  279095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:58:10.563837  279095 start.go:340] cluster config:
	{Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeR
equested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:58:10.564033  279095 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:58:10.565811  279095 out.go:177] * Starting "newest-cni-409154" primary control-plane node in "newest-cni-409154" cluster
	I1216 11:58:10.567051  279095 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:58:10.567086  279095 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1216 11:58:10.567099  279095 cache.go:56] Caching tarball of preloaded images
	I1216 11:58:10.567176  279095 preload.go:172] Found /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 11:58:10.567186  279095 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1216 11:58:10.567281  279095 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/config.json ...
	I1216 11:58:10.567464  279095 start.go:360] acquireMachinesLock for newest-cni-409154: {Name:mk416695f38f04563a2ca155c5383ee4d8f0f97b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:58:10.567508  279095 start.go:364] duration metric: took 24.753µs to acquireMachinesLock for "newest-cni-409154"
	I1216 11:58:10.567522  279095 start.go:96] Skipping create...Using existing machine configuration
	I1216 11:58:10.567530  279095 fix.go:54] fixHost starting: 
	I1216 11:58:10.567819  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:10.567855  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:10.582641  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I1216 11:58:10.583122  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:10.583779  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:10.583807  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:10.584109  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:10.584302  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:10.584447  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:10.585895  279095 fix.go:112] recreateIfNeeded on newest-cni-409154: state=Stopped err=<nil>
	I1216 11:58:10.585928  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	W1216 11:58:10.586110  279095 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 11:58:10.587967  279095 out.go:177] * Restarting existing kvm2 VM for "newest-cni-409154" ...
	I1216 11:58:08.692849  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:08.705140  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:08.705206  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:08.745953  276553 cri.go:89] found id: ""
	I1216 11:58:08.745985  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.745994  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:08.746001  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:08.746053  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:08.777650  276553 cri.go:89] found id: ""
	I1216 11:58:08.777678  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.777686  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:08.777692  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:08.777753  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:08.810501  276553 cri.go:89] found id: ""
	I1216 11:58:08.810530  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.810541  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:08.810547  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:08.810602  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:08.843082  276553 cri.go:89] found id: ""
	I1216 11:58:08.843111  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.843120  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:08.843126  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:08.843175  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:08.875195  276553 cri.go:89] found id: ""
	I1216 11:58:08.875223  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.875232  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:08.875238  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:08.875308  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:08.907296  276553 cri.go:89] found id: ""
	I1216 11:58:08.907334  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.907346  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:08.907354  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:08.907409  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:08.939491  276553 cri.go:89] found id: ""
	I1216 11:58:08.939525  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.939537  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:08.939544  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:08.939607  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:08.970370  276553 cri.go:89] found id: ""
	I1216 11:58:08.970407  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.970420  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:08.970434  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:08.970452  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:08.983347  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:08.983393  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:09.057735  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:09.057765  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:09.057784  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:09.136549  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:09.136588  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:09.186771  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:09.186811  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:11.756641  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:11.776517  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:11.776588  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:11.813876  276553 cri.go:89] found id: ""
	I1216 11:58:11.813912  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.813925  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:11.813933  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:11.814000  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:11.850775  276553 cri.go:89] found id: ""
	I1216 11:58:11.850813  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.850825  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:11.850835  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:11.850894  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:11.881886  276553 cri.go:89] found id: ""
	I1216 11:58:11.881920  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.881933  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:11.881942  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:11.882008  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:11.913165  276553 cri.go:89] found id: ""
	I1216 11:58:11.913196  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.913209  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:11.913217  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:11.913279  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:11.945192  276553 cri.go:89] found id: ""
	I1216 11:58:11.945220  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.945231  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:11.945239  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:11.945297  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:11.977631  276553 cri.go:89] found id: ""
	I1216 11:58:11.977661  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.977673  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:11.977682  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:11.977755  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:12.009497  276553 cri.go:89] found id: ""
	I1216 11:58:12.009527  276553 logs.go:282] 0 containers: []
	W1216 11:58:12.009536  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:12.009546  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:12.009610  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:12.045501  276553 cri.go:89] found id: ""
	I1216 11:58:12.045524  276553 logs.go:282] 0 containers: []
	W1216 11:58:12.045534  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:12.045547  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:12.045564  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:12.114030  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:12.114057  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:12.114073  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:12.188314  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:12.188356  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:12.224600  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:12.224632  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:12.277641  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:12.277681  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
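
	The cycle above repeats for each control-plane component: list matching CRI containers with crictl, warn when none are found, then fall back to host-level logs (kubelet, dmesg, describe nodes, CRI-O, container status). A minimal sketch of that pattern follows; it is not minikube's actual code, and runSSH is a hypothetical stand-in for the ssh_runner seen in the trace that simply shells out locally.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runSSH is a hypothetical stand-in for minikube's ssh_runner; here it runs locally.
	func runSSH(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}

		for _, name := range components {
			// Mirrors: sudo crictl ps -a --quiet --name=<component>
			out, err := runSSH("sudo crictl ps -a --quiet --name=" + name)
			ids := strings.Fields(out)
			if err != nil || len(ids) == 0 {
				fmt.Printf("W: no container was found matching %q\n", name)
				continue
			}
			fmt.Printf("I: found %d container(s) for %q\n", len(ids), name)
		}

		// With nothing running, fall back to host-level logs, as the trace does.
		for _, cmd := range []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo journalctl -u crio -n 400",
		} {
			if _, err := runSSH(cmd); err != nil {
				fmt.Printf("W: %q failed: %v\n", cmd, err)
			}
		}
	}
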
	I1216 11:58:10.589206  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Start
	I1216 11:58:10.589378  279095 main.go:141] libmachine: (newest-cni-409154) starting domain...
	I1216 11:58:10.589402  279095 main.go:141] libmachine: (newest-cni-409154) ensuring networks are active...
	I1216 11:58:10.590045  279095 main.go:141] libmachine: (newest-cni-409154) Ensuring network default is active
	I1216 11:58:10.590345  279095 main.go:141] libmachine: (newest-cni-409154) Ensuring network mk-newest-cni-409154 is active
	I1216 11:58:10.590691  279095 main.go:141] libmachine: (newest-cni-409154) getting domain XML...
	I1216 11:58:10.591328  279095 main.go:141] libmachine: (newest-cni-409154) creating domain...
	I1216 11:58:11.793966  279095 main.go:141] libmachine: (newest-cni-409154) waiting for IP...
	I1216 11:58:11.795095  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:11.795603  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:11.795695  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:11.795591  279132 retry.go:31] will retry after 244.170622ms: waiting for domain to come up
	I1216 11:58:12.041392  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:12.042035  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:12.042065  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:12.042003  279132 retry.go:31] will retry after 378.076417ms: waiting for domain to come up
	I1216 11:58:12.421749  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:12.422240  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:12.422267  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:12.422216  279132 retry.go:31] will retry after 370.938245ms: waiting for domain to come up
	I1216 11:58:12.794930  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:12.795410  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:12.795430  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:12.795372  279132 retry.go:31] will retry after 380.56228ms: waiting for domain to come up
	I1216 11:58:13.177977  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:13.178564  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:13.178597  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:13.178500  279132 retry.go:31] will retry after 582.330697ms: waiting for domain to come up
	I1216 11:58:13.762033  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:13.762664  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:13.762701  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:13.762593  279132 retry.go:31] will retry after 600.533428ms: waiting for domain to come up
	I1216 11:58:14.364374  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:14.364791  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:14.364828  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:14.364752  279132 retry.go:31] will retry after 773.596823ms: waiting for domain to come up
	I1216 11:58:15.139784  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:15.140270  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:15.140300  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:15.140224  279132 retry.go:31] will retry after 1.264403571s: waiting for domain to come up
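
	Interleaved with the log gathering above, a second process (newest-cni-409154) polls libvirt for the domain's IP, sleeping a growing, randomized interval between attempts ("will retry after ..."). A rough sketch of that wait loop, under assumptions, is below; lookupIP is a hypothetical stand-in for the DHCP-lease lookup and only succeeds after a few polls.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("unable to find current IP address of domain")

	// lookupIP pretends the lease only appears after a few polls.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errNoIP
		}
		return "192.168.39.202", nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("found domain IP:", ip)
				return
			}
			// Randomized delay that grows with the attempt number, roughly
			// matching the 244ms -> 3.7s progression in the trace.
			delay := time.Duration(200+rand.Intn(200)) * time.Millisecond * time.Duration(attempt+1)
			fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
			time.Sleep(delay)
		}
		fmt.Println("timed out waiting for domain IP")
	}
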
	I1216 11:58:14.791934  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:14.805168  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:14.805255  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:14.837804  276553 cri.go:89] found id: ""
	I1216 11:58:14.837834  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.837898  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:14.837911  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:14.837976  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:14.871140  276553 cri.go:89] found id: ""
	I1216 11:58:14.871171  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.871183  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:14.871191  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:14.871254  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:14.903081  276553 cri.go:89] found id: ""
	I1216 11:58:14.903118  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.903127  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:14.903133  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:14.903196  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:14.942599  276553 cri.go:89] found id: ""
	I1216 11:58:14.942637  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.942650  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:14.942658  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:14.942723  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:14.981765  276553 cri.go:89] found id: ""
	I1216 11:58:14.981797  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.981809  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:14.981816  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:14.981878  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:15.020936  276553 cri.go:89] found id: ""
	I1216 11:58:15.020977  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.020987  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:15.020993  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:15.021052  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:15.053954  276553 cri.go:89] found id: ""
	I1216 11:58:15.053995  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.054008  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:15.054016  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:15.054081  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:15.088792  276553 cri.go:89] found id: ""
	I1216 11:58:15.088828  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.088839  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:15.088852  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:15.088867  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:15.143836  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:15.143873  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:15.162594  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:15.162637  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:15.252534  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:15.252562  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:15.252578  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:15.337849  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:15.337892  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:17.880680  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:17.893716  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:17.893807  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:17.928342  276553 cri.go:89] found id: ""
	I1216 11:58:17.928379  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.928394  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:17.928402  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:17.928468  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:17.964564  276553 cri.go:89] found id: ""
	I1216 11:58:17.964609  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.964618  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:17.964624  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:17.964677  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:16.406244  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:16.406755  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:16.406782  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:16.406707  279132 retry.go:31] will retry after 1.148140994s: waiting for domain to come up
	I1216 11:58:17.557073  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:17.557603  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:17.557625  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:17.557562  279132 retry.go:31] will retry after 1.49928484s: waiting for domain to come up
	I1216 11:58:19.058022  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:19.058469  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:19.058493  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:19.058429  279132 retry.go:31] will retry after 1.785857688s: waiting for domain to come up
	I1216 11:58:17.999903  276553 cri.go:89] found id: ""
	I1216 11:58:17.999937  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.999946  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:17.999952  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:18.000011  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:18.042198  276553 cri.go:89] found id: ""
	I1216 11:58:18.042230  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.042243  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:18.042250  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:18.042314  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:18.078020  276553 cri.go:89] found id: ""
	I1216 11:58:18.078056  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.078070  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:18.078080  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:18.078154  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:18.111353  276553 cri.go:89] found id: ""
	I1216 11:58:18.111392  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.111404  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:18.111412  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:18.111485  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:18.147126  276553 cri.go:89] found id: ""
	I1216 11:58:18.147161  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.147172  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:18.147178  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:18.147245  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:18.181924  276553 cri.go:89] found id: ""
	I1216 11:58:18.181962  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.181974  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:18.181989  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:18.182007  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:18.235545  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:18.235588  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:18.251579  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:18.251610  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:18.316207  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:18.316238  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:18.316255  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:18.389630  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:18.389677  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:20.929592  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:20.944290  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:20.944382  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:20.991069  276553 cri.go:89] found id: ""
	I1216 11:58:20.991107  276553 logs.go:282] 0 containers: []
	W1216 11:58:20.991118  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:20.991126  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:20.991191  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:21.033257  276553 cri.go:89] found id: ""
	I1216 11:58:21.033291  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.033304  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:21.033311  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:21.033397  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:21.068318  276553 cri.go:89] found id: ""
	I1216 11:58:21.068357  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.068370  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:21.068378  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:21.068449  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:21.100812  276553 cri.go:89] found id: ""
	I1216 11:58:21.100847  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.100860  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:21.100867  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:21.100943  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:21.136004  276553 cri.go:89] found id: ""
	I1216 11:58:21.136037  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.136048  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:21.136054  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:21.136121  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:21.172785  276553 cri.go:89] found id: ""
	I1216 11:58:21.172825  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.172836  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:21.172842  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:21.172907  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:21.207325  276553 cri.go:89] found id: ""
	I1216 11:58:21.207381  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.207402  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:21.207413  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:21.207480  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:21.242438  276553 cri.go:89] found id: ""
	I1216 11:58:21.242479  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.242493  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:21.242508  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:21.242526  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:21.283025  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:21.283069  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:21.335930  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:21.335979  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:21.349370  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:21.349403  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:21.427874  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:21.427914  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:21.427932  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:20.846031  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:20.846581  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:20.846631  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:20.846572  279132 retry.go:31] will retry after 2.9103898s: waiting for domain to come up
	I1216 11:58:23.760767  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:23.761253  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:23.761287  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:23.761188  279132 retry.go:31] will retry after 3.698063043s: waiting for domain to come up
	I1216 11:58:24.015947  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:24.028721  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:24.028787  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:24.061707  276553 cri.go:89] found id: ""
	I1216 11:58:24.061736  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.061745  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:24.061751  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:24.061803  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:24.095657  276553 cri.go:89] found id: ""
	I1216 11:58:24.095687  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.095696  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:24.095702  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:24.095752  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:24.128755  276553 cri.go:89] found id: ""
	I1216 11:58:24.128784  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.128793  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:24.128799  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:24.128847  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:24.162145  276553 cri.go:89] found id: ""
	I1216 11:58:24.162180  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.162189  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:24.162194  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:24.162248  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:24.194650  276553 cri.go:89] found id: ""
	I1216 11:58:24.194689  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.194702  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:24.194709  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:24.194784  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:24.226091  276553 cri.go:89] found id: ""
	I1216 11:58:24.226127  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.226139  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:24.226147  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:24.226207  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:24.258140  276553 cri.go:89] found id: ""
	I1216 11:58:24.258184  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.258194  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:24.258200  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:24.258254  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:24.289916  276553 cri.go:89] found id: ""
	I1216 11:58:24.289948  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.289957  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:24.289969  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:24.289982  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:24.338070  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:24.338118  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:24.351201  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:24.351242  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:24.422998  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:24.423027  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:24.423039  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:24.499059  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:24.499113  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:27.036987  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:27.049417  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:27.049505  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:27.080923  276553 cri.go:89] found id: ""
	I1216 11:58:27.080951  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.080971  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:27.080980  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:27.081037  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:27.111686  276553 cri.go:89] found id: ""
	I1216 11:58:27.111717  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.111725  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:27.111731  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:27.111781  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:27.142935  276553 cri.go:89] found id: ""
	I1216 11:58:27.142966  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.142976  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:27.142984  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:27.143048  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:27.176277  276553 cri.go:89] found id: ""
	I1216 11:58:27.176309  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.176320  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:27.176326  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:27.176399  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:27.206698  276553 cri.go:89] found id: ""
	I1216 11:58:27.206733  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.206744  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:27.206752  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:27.206816  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:27.238188  276553 cri.go:89] found id: ""
	I1216 11:58:27.238225  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.238245  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:27.238253  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:27.238319  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:27.269646  276553 cri.go:89] found id: ""
	I1216 11:58:27.269678  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.269690  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:27.269697  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:27.269764  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:27.304992  276553 cri.go:89] found id: ""
	I1216 11:58:27.305022  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.305032  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:27.305042  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:27.305057  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:27.379755  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:27.379798  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:27.415958  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:27.415998  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:27.468345  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:27.468378  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:27.482879  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:27.482910  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:27.551153  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:27.461758  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.462297  279095 main.go:141] libmachine: (newest-cni-409154) found domain IP: 192.168.39.202
	I1216 11:58:27.462330  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has current primary IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.462345  279095 main.go:141] libmachine: (newest-cni-409154) reserving static IP address...
	I1216 11:58:27.462706  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "newest-cni-409154", mac: "52:54:00:23:fb:52", ip: "192.168.39.202"} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.462733  279095 main.go:141] libmachine: (newest-cni-409154) DBG | skip adding static IP to network mk-newest-cni-409154 - found existing host DHCP lease matching {name: "newest-cni-409154", mac: "52:54:00:23:fb:52", ip: "192.168.39.202"}
	I1216 11:58:27.462751  279095 main.go:141] libmachine: (newest-cni-409154) reserved static IP address 192.168.39.202 for domain newest-cni-409154
	I1216 11:58:27.462761  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Getting to WaitForSSH function...
	I1216 11:58:27.462769  279095 main.go:141] libmachine: (newest-cni-409154) waiting for SSH...
	I1216 11:58:27.464970  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.465299  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.465323  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.465446  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Using SSH client type: external
	I1216 11:58:27.465486  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa (-rw-------)
	I1216 11:58:27.465535  279095 main.go:141] libmachine: (newest-cni-409154) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 11:58:27.465568  279095 main.go:141] libmachine: (newest-cni-409154) DBG | About to run SSH command:
	I1216 11:58:27.465586  279095 main.go:141] libmachine: (newest-cni-409154) DBG | exit 0
	I1216 11:58:27.589004  279095 main.go:141] libmachine: (newest-cni-409154) DBG | SSH cmd err, output: <nil>: 
	I1216 11:58:27.589479  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetConfigRaw
	I1216 11:58:27.590146  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:27.592843  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.593292  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.593326  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.593571  279095 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/config.json ...
	I1216 11:58:27.593797  279095 machine.go:93] provisionDockerMachine start ...
	I1216 11:58:27.593817  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:27.594055  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.597195  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.597567  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.597598  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.597715  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:27.597907  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.598105  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.598253  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:27.598462  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:27.598720  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:27.598734  279095 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 11:58:27.697242  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 11:58:27.697284  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetMachineName
	I1216 11:58:27.697579  279095 buildroot.go:166] provisioning hostname "newest-cni-409154"
	I1216 11:58:27.697618  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetMachineName
	I1216 11:58:27.697818  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.700788  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.701199  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.701231  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.701465  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:27.701659  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.701868  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.701996  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:27.702154  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:27.702385  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:27.702412  279095 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-409154 && echo "newest-cni-409154" | sudo tee /etc/hostname
	I1216 11:58:27.810794  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-409154
	
	I1216 11:58:27.810827  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.813678  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.814176  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.814219  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.814350  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:27.814559  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.814706  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.814856  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:27.815025  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:27.815211  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:27.815227  279095 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-409154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-409154/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-409154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:58:27.921763  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:58:27.921799  279095 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 11:58:27.921855  279095 buildroot.go:174] setting up certificates
	I1216 11:58:27.921869  279095 provision.go:84] configureAuth start
	I1216 11:58:27.921885  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetMachineName
	I1216 11:58:27.922180  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:27.924925  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.925273  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.925305  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.925452  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.927662  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.927976  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.928006  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.928167  279095 provision.go:143] copyHostCerts
	I1216 11:58:27.928234  279095 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem, removing ...
	I1216 11:58:27.928247  279095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem
	I1216 11:58:27.928329  279095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 11:58:27.928444  279095 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem, removing ...
	I1216 11:58:27.928456  279095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem
	I1216 11:58:27.928491  279095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 11:58:27.928836  279095 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem, removing ...
	I1216 11:58:27.928995  279095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem
	I1216 11:58:27.929057  279095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
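
	copyHostCerts above removes any stale ca.pem/cert.pem/key.pem before copying the fresh certificates into the profile directory. A minimal remove-then-copy sketch, with purely illustrative paths rather than the real .minikube layout, looks like this:

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyHostCert deletes an existing destination, then copies src into place,
	// mirroring the found/removing/cp sequence in the trace.
	func copyHostCert(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		// Hypothetical source/destination pairs.
		pairs := [][2]string{
			{"certs/ca.pem", "ca.pem"},
			{"certs/cert.pem", "cert.pem"},
			{"certs/key.pem", "key.pem"},
		}
		for _, p := range pairs {
			if err := copyHostCert(p[0], p[1]); err != nil {
				fmt.Println("copy failed:", err)
			}
		}
	}
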
	I1216 11:58:27.929198  279095 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.newest-cni-409154 san=[127.0.0.1 192.168.39.202 localhost minikube newest-cni-409154]
	I1216 11:58:28.119927  279095 provision.go:177] copyRemoteCerts
	I1216 11:58:28.119993  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:58:28.120033  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.122642  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.122863  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.122888  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.123099  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.123312  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.123510  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.123639  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.203158  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:58:28.230017  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 11:58:28.255874  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 11:58:28.281032  279095 provision.go:87] duration metric: took 359.143013ms to configureAuth
	I1216 11:58:28.281064  279095 buildroot.go:189] setting minikube options for container-runtime
	I1216 11:58:28.281272  279095 config.go:182] Loaded profile config "newest-cni-409154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:58:28.281381  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.283867  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.284173  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.284205  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.284362  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.284586  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.284761  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.284907  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.285075  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:28.285289  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:28.285311  279095 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:58:28.493363  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:58:28.493395  279095 machine.go:96] duration metric: took 899.585204ms to provisionDockerMachine
	I1216 11:58:28.493418  279095 start.go:293] postStartSetup for "newest-cni-409154" (driver="kvm2")
	I1216 11:58:28.493435  279095 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:58:28.493464  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.493804  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:58:28.493837  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.496887  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.497252  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.497305  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.497551  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.497781  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.497974  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.498122  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.575105  279095 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:58:28.579116  279095 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 11:58:28.579146  279095 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 11:58:28.579210  279095 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 11:58:28.579283  279095 filesync.go:149] local asset: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem -> 2175192.pem in /etc/ssl/certs
	I1216 11:58:28.579384  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 11:58:28.588438  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:58:28.611018  279095 start.go:296] duration metric: took 117.581046ms for postStartSetup
	I1216 11:58:28.611076  279095 fix.go:56] duration metric: took 18.043540567s for fixHost
	I1216 11:58:28.611100  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.614398  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.614793  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.614826  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.615084  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.615326  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.615523  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.615719  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.615908  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:28.616090  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:28.616105  279095 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 11:58:28.717339  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734350308.689557050
	
	I1216 11:58:28.717371  279095 fix.go:216] guest clock: 1734350308.689557050
	I1216 11:58:28.717382  279095 fix.go:229] Guest: 2024-12-16 11:58:28.68955705 +0000 UTC Remote: 2024-12-16 11:58:28.611080616 +0000 UTC m=+18.193007687 (delta=78.476434ms)
	I1216 11:58:28.717413  279095 fix.go:200] guest clock delta is within tolerance: 78.476434ms
	I1216 11:58:28.717419  279095 start.go:83] releasing machines lock for "newest-cni-409154", held for 18.149901468s
	I1216 11:58:28.717440  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.717739  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:28.720755  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.721190  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.721220  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.721383  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.721877  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.722040  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.722130  279095 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:58:28.722179  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.722312  279095 ssh_runner.go:195] Run: cat /version.json
	I1216 11:58:28.722337  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.724752  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725087  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.725113  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725133  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725285  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.725472  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.725600  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.725623  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725634  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.725803  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.725790  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.725944  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.726118  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.726278  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.798361  279095 ssh_runner.go:195] Run: systemctl --version
	I1216 11:58:28.823281  279095 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:58:28.965469  279095 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 11:58:28.970957  279095 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 11:58:28.971032  279095 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:58:28.986070  279095 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 11:58:28.986095  279095 start.go:495] detecting cgroup driver to use...
	I1216 11:58:28.986168  279095 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:58:29.002166  279095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:58:29.015245  279095 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:58:29.015357  279095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:58:29.028270  279095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:58:29.040809  279095 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:58:29.153768  279095 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:58:29.296765  279095 docker.go:233] disabling docker service ...
	I1216 11:58:29.296853  279095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:58:29.310642  279095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:58:29.322968  279095 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:58:29.458651  279095 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:58:29.569319  279095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:58:29.583488  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:58:29.602278  279095 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 11:58:29.602346  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.612191  279095 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:58:29.612256  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.621862  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.631438  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.641222  279095 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:58:29.652611  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.663073  279095 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.679545  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.690214  279095 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:58:29.699851  279095 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 11:58:29.699926  279095 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 11:58:29.713189  279095 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 11:58:29.722840  279095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:58:29.848101  279095 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 11:58:29.935007  279095 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:58:29.935088  279095 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:58:29.939824  279095 start.go:563] Will wait 60s for crictl version
	I1216 11:58:29.939910  279095 ssh_runner.go:195] Run: which crictl
	I1216 11:58:29.943491  279095 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:58:29.980696  279095 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 11:58:29.980807  279095 ssh_runner.go:195] Run: crio --version
	I1216 11:58:30.009245  279095 ssh_runner.go:195] Run: crio --version
	I1216 11:58:30.038597  279095 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1216 11:58:30.040039  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:30.042931  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:30.043287  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:30.043320  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:30.043662  279095 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 11:58:30.047939  279095 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:58:30.062384  279095 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 11:58:30.063947  279095 kubeadm.go:883] updating cluster {Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:58:30.064099  279095 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:58:30.064174  279095 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:58:30.110756  279095 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1216 11:58:30.110842  279095 ssh_runner.go:195] Run: which lz4
	I1216 11:58:30.115974  279095 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 11:58:30.120455  279095 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 11:58:30.120505  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1216 11:58:30.052180  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:30.065848  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:30.065910  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:30.108387  276553 cri.go:89] found id: ""
	I1216 11:58:30.108418  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.108428  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:30.108436  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:30.108510  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:30.143956  276553 cri.go:89] found id: ""
	I1216 11:58:30.143997  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.144008  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:30.144014  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:30.144079  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:30.177213  276553 cri.go:89] found id: ""
	I1216 11:58:30.177250  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.177263  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:30.177272  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:30.177344  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:30.210808  276553 cri.go:89] found id: ""
	I1216 11:58:30.210846  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.210858  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:30.210867  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:30.210943  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:30.243895  276553 cri.go:89] found id: ""
	I1216 11:58:30.243935  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.243947  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:30.243955  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:30.244026  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:30.282295  276553 cri.go:89] found id: ""
	I1216 11:58:30.282335  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.282347  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:30.282355  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:30.282424  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:30.325096  276553 cri.go:89] found id: ""
	I1216 11:58:30.325127  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.325137  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:30.325146  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:30.325223  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:30.368651  276553 cri.go:89] found id: ""
	I1216 11:58:30.368688  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.368702  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:30.368715  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:30.368732  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:30.429442  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:30.429481  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:30.447157  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:30.447197  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:30.525823  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:30.525851  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:30.525876  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:30.619321  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:30.619374  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:31.365838  279095 crio.go:462] duration metric: took 1.249888265s to copy over tarball
	I1216 11:58:31.365939  279095 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 11:58:33.464744  279095 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.098774355s)
	I1216 11:58:33.464768  279095 crio.go:469] duration metric: took 2.098894697s to extract the tarball
	I1216 11:58:33.464775  279095 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 11:58:33.502605  279095 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:58:33.552519  279095 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 11:58:33.552546  279095 cache_images.go:84] Images are preloaded, skipping loading
	I1216 11:58:33.552564  279095 kubeadm.go:934] updating node { 192.168.39.202 8443 v1.31.2 crio true true} ...
	I1216 11:58:33.552695  279095 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-409154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 11:58:33.552789  279095 ssh_runner.go:195] Run: crio config
	I1216 11:58:33.599280  279095 cni.go:84] Creating CNI manager for ""
	I1216 11:58:33.599316  279095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:58:33.599330  279095 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1216 11:58:33.599369  279095 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-409154 NodeName:newest-cni-409154 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 11:58:33.599559  279095 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-409154"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.202"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 11:58:33.599635  279095 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 11:58:33.611454  279095 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:58:33.611560  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:58:33.620442  279095 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1216 11:58:33.636061  279095 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:58:33.651452  279095 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1216 11:58:33.667434  279095 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I1216 11:58:33.672022  279095 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:58:33.688407  279095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:58:33.825530  279095 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:58:33.842084  279095 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154 for IP: 192.168.39.202
	I1216 11:58:33.842119  279095 certs.go:194] generating shared ca certs ...
	I1216 11:58:33.842143  279095 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:33.842348  279095 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 11:58:33.842417  279095 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 11:58:33.842433  279095 certs.go:256] generating profile certs ...
	I1216 11:58:33.842546  279095 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/client.key
	I1216 11:58:33.842651  279095 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/apiserver.key.4b1f7a67
	I1216 11:58:33.842714  279095 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/proxy-client.key
	I1216 11:58:33.842887  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem (1338 bytes)
	W1216 11:58:33.842940  279095 certs.go:480] ignoring /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519_empty.pem, impossibly tiny 0 bytes
	I1216 11:58:33.842954  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 11:58:33.842995  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:58:33.843034  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:58:33.843080  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 11:58:33.843153  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:58:33.843887  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:58:33.888237  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 11:58:33.922983  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:58:33.947106  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 11:58:33.979827  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 11:58:34.006341  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 11:58:34.029912  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:58:34.052408  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 11:58:34.074341  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:58:34.096314  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem --> /usr/share/ca-certificates/217519.pem (1338 bytes)
	I1216 11:58:34.117813  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /usr/share/ca-certificates/2175192.pem (1708 bytes)
	I1216 11:58:34.139265  279095 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:58:34.154749  279095 ssh_runner.go:195] Run: openssl version
	I1216 11:58:34.160150  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:58:34.170031  279095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:58:34.174128  279095 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:58:34.174192  279095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:58:34.179382  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 11:58:34.189755  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/217519.pem && ln -fs /usr/share/ca-certificates/217519.pem /etc/ssl/certs/217519.pem"
	I1216 11:58:34.200079  279095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/217519.pem
	I1216 11:58:34.204422  279095 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 10:42 /usr/share/ca-certificates/217519.pem
	I1216 11:58:34.204483  279095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/217519.pem
	I1216 11:58:34.210007  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/217519.pem /etc/ssl/certs/51391683.0"
	I1216 11:58:34.219577  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2175192.pem && ln -fs /usr/share/ca-certificates/2175192.pem /etc/ssl/certs/2175192.pem"
	I1216 11:58:34.229612  279095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2175192.pem
	I1216 11:58:34.233804  279095 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 10:42 /usr/share/ca-certificates/2175192.pem
	I1216 11:58:34.233855  279095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2175192.pem
	I1216 11:58:34.239357  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2175192.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 11:58:34.249593  279095 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:58:34.253857  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 11:58:34.259667  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 11:58:34.265350  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 11:58:34.271063  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 11:58:34.276571  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 11:58:34.282052  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 11:58:34.287542  279095 kubeadm.go:392] StartCluster: {Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:58:34.287635  279095 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:58:34.287698  279095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:58:34.330701  279095 cri.go:89] found id: ""
	I1216 11:58:34.330766  279095 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 11:58:34.340500  279095 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 11:58:34.340523  279095 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 11:58:34.340563  279095 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 11:58:34.351292  279095 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 11:58:34.351877  279095 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-409154" does not appear in /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:58:34.352074  279095 kubeconfig.go:62] /home/jenkins/minikube-integration/20107-210204/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-409154" cluster setting kubeconfig missing "newest-cni-409154" context setting]
	I1216 11:58:34.352501  279095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:34.353808  279095 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 11:58:34.363101  279095 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.202
	I1216 11:58:34.363144  279095 kubeadm.go:1160] stopping kube-system containers ...
	I1216 11:58:34.363157  279095 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 11:58:34.363210  279095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:58:34.397341  279095 cri.go:89] found id: ""
	I1216 11:58:34.397410  279095 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 11:58:34.412614  279095 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:58:34.421801  279095 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:58:34.421830  279095 kubeadm.go:157] found existing configuration files:
	
	I1216 11:58:34.421890  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:58:34.430246  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:58:34.430309  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:58:34.438808  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:58:34.447241  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:58:34.447315  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:58:34.456064  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:58:34.464112  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:58:34.464179  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:58:34.472719  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:58:34.481088  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:58:34.481162  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 11:58:34.489902  279095 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:58:34.499478  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:34.600562  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:33.167369  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:33.180007  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:33.180135  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:33.216102  276553 cri.go:89] found id: ""
	I1216 11:58:33.216139  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.216149  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:33.216156  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:33.216219  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:33.264290  276553 cri.go:89] found id: ""
	I1216 11:58:33.264331  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.264351  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:33.264360  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:33.264428  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:33.307400  276553 cri.go:89] found id: ""
	I1216 11:58:33.307440  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.307452  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:33.307461  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:33.307528  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:33.348555  276553 cri.go:89] found id: ""
	I1216 11:58:33.348597  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.348610  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:33.348619  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:33.348688  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:33.385255  276553 cri.go:89] found id: ""
	I1216 11:58:33.385286  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.385296  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:33.385303  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:33.385366  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:33.422656  276553 cri.go:89] found id: ""
	I1216 11:58:33.422701  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.422713  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:33.422722  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:33.422783  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:33.461547  276553 cri.go:89] found id: ""
	I1216 11:58:33.461582  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.461591  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:33.461601  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:33.461651  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:33.496893  276553 cri.go:89] found id: ""
	I1216 11:58:33.496935  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.496948  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:33.496987  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:33.497003  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:33.510577  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:33.510609  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:33.579037  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:33.579064  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:33.579080  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:33.657142  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:33.657178  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:33.703963  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:33.703993  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:36.255123  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:36.269198  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:36.269265  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:36.302149  276553 cri.go:89] found id: ""
	I1216 11:58:36.302189  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.302202  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:36.302210  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:36.302278  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:36.334332  276553 cri.go:89] found id: ""
	I1216 11:58:36.334367  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.334378  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:36.334386  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:36.334478  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:36.367219  276553 cri.go:89] found id: ""
	I1216 11:58:36.367251  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.367262  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:36.367271  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:36.367346  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:36.409111  276553 cri.go:89] found id: ""
	I1216 11:58:36.409142  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.409154  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:36.409162  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:36.409235  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:36.453572  276553 cri.go:89] found id: ""
	I1216 11:58:36.453612  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.453624  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:36.453639  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:36.453713  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:36.498382  276553 cri.go:89] found id: ""
	I1216 11:58:36.498420  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.498430  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:36.498445  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:36.498516  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:36.533177  276553 cri.go:89] found id: ""
	I1216 11:58:36.533213  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.533225  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:36.533234  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:36.533315  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:36.568180  276553 cri.go:89] found id: ""
	I1216 11:58:36.568219  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.568232  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:36.568247  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:36.568263  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:36.631684  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:36.631732  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:36.646177  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:36.646219  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:36.715265  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:36.715298  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:36.715360  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:36.795141  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:36.795187  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:35.572311  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:35.786524  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:35.872020  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:35.964712  279095 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:58:35.964813  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:36.465153  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:36.965020  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:37.465530  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:37.965157  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:38.465454  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:38.479784  279095 api_server.go:72] duration metric: took 2.515071544s to wait for apiserver process to appear ...
	I1216 11:58:38.479821  279095 api_server.go:88] waiting for apiserver healthz status ...
	I1216 11:58:38.479849  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.266917  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 11:58:40.266944  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 11:58:40.266957  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.277079  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 11:58:40.277107  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 11:58:40.480677  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.486236  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:58:40.486263  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:58:40.979982  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.987028  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:58:40.987054  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:58:41.480764  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:41.487009  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:58:41.487037  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:58:41.980637  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:41.985077  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1216 11:58:41.991955  279095 api_server.go:141] control plane version: v1.31.2
	I1216 11:58:41.991987  279095 api_server.go:131] duration metric: took 3.512159263s to wait for apiserver health ...
	I1216 11:58:41.991997  279095 cni.go:84] Creating CNI manager for ""
	I1216 11:58:41.992003  279095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:58:41.993731  279095 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 11:58:41.994974  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 11:58:42.005415  279095 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 11:58:42.022839  279095 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 11:58:42.033438  279095 system_pods.go:59] 8 kube-system pods found
	I1216 11:58:42.033476  279095 system_pods.go:61] "coredns-7c65d6cfc9-cvvpl" [2962f6fc-40be-45f2-9096-aadecacdc0b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 11:58:42.033486  279095 system_pods.go:61] "etcd-newest-cni-409154" [de9adbc9-3837-4367-b60c-0a28b5675107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 11:58:42.033499  279095 system_pods.go:61] "kube-apiserver-newest-cni-409154" [62b9084c-0af2-4592-824c-8769ed71635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 11:58:42.033508  279095 system_pods.go:61] "kube-controller-manager-newest-cni-409154" [bde6fb21-32c3-4da8-909b-20b8b64f6e36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 11:58:42.033521  279095 system_pods.go:61] "kube-proxy-nz6pz" [703e409a-1ced-4b28-babc-6b8d233b9fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 11:58:42.033534  279095 system_pods.go:61] "kube-scheduler-newest-cni-409154" [850ced2f-6dae-40ef-ab2a-c5c66c5cd177] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 11:58:42.033551  279095 system_pods.go:61] "metrics-server-6867b74b74-9tg4t" [ba1ea4d4-8b10-41c3-944a-831cee9abc82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 11:58:42.033563  279095 system_pods.go:61] "storage-provisioner" [b3f6ed1a-bb74-4a9f-81f4-8ee598480fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 11:58:42.033575  279095 system_pods.go:74] duration metric: took 10.70808ms to wait for pod list to return data ...
	I1216 11:58:42.033585  279095 node_conditions.go:102] verifying NodePressure condition ...
	I1216 11:58:42.036820  279095 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 11:58:42.036844  279095 node_conditions.go:123] node cpu capacity is 2
	I1216 11:58:42.036875  279095 node_conditions.go:105] duration metric: took 3.281402ms to run NodePressure ...
	I1216 11:58:42.036900  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:42.327663  279095 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 11:58:42.339587  279095 ops.go:34] apiserver oom_adj: -16
	I1216 11:58:42.339616  279095 kubeadm.go:597] duration metric: took 7.999086573s to restartPrimaryControlPlane
	I1216 11:58:42.339627  279095 kubeadm.go:394] duration metric: took 8.052090671s to StartCluster
	I1216 11:58:42.339674  279095 settings.go:142] acquiring lock: {Name:mk3e7a5ea729d05e36deedcd45fd7829ccafa72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:42.339767  279095 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:58:42.340896  279095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:42.341317  279095 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:58:42.341358  279095 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 11:58:42.341468  279095 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-409154"
	I1216 11:58:42.341493  279095 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-409154"
	W1216 11:58:42.341502  279095 addons.go:243] addon storage-provisioner should already be in state true
	I1216 11:58:42.341525  279095 addons.go:69] Setting default-storageclass=true in profile "newest-cni-409154"
	I1216 11:58:42.341546  279095 addons.go:69] Setting dashboard=true in profile "newest-cni-409154"
	I1216 11:58:42.341534  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.341554  279095 config.go:182] Loaded profile config "newest-cni-409154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:58:42.341562  279095 addons.go:69] Setting metrics-server=true in profile "newest-cni-409154"
	I1216 11:58:42.341602  279095 addons.go:234] Setting addon metrics-server=true in "newest-cni-409154"
	W1216 11:58:42.341613  279095 addons.go:243] addon metrics-server should already be in state true
	I1216 11:58:42.341550  279095 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-409154"
	I1216 11:58:42.341668  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.341562  279095 addons.go:234] Setting addon dashboard=true in "newest-cni-409154"
	W1216 11:58:42.341766  279095 addons.go:243] addon dashboard should already be in state true
	I1216 11:58:42.341812  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.342033  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342055  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342066  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342065  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342085  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342107  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342207  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342230  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342910  279095 out.go:177] * Verifying Kubernetes components...
	I1216 11:58:42.344377  279095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:58:42.358561  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34565
	I1216 11:58:42.359188  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.359817  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.359841  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.360254  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.360504  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.362469  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
	I1216 11:58:42.362503  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I1216 11:58:42.362558  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I1216 11:58:42.362857  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.363000  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.363324  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.363351  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.363627  279095 addons.go:234] Setting addon default-storageclass=true in "newest-cni-409154"
	W1216 11:58:42.363647  279095 addons.go:243] addon default-storageclass should already be in state true
	I1216 11:58:42.363681  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.363730  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.363865  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.363890  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.363979  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.364019  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.364264  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.364300  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.364330  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.364468  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.364811  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.364857  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.365039  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.365061  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.365659  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.366150  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.366193  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.379564  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I1216 11:58:42.383427  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I1216 11:58:42.384214  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43403
	I1216 11:58:42.389453  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.389476  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.389687  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.389977  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.389995  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.390001  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.390016  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.390284  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.390308  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.390402  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.390497  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.390711  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.390766  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.390961  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.390969  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.391003  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.392531  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.393754  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.394494  279095 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1216 11:58:42.395267  279095 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 11:58:42.396422  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 11:58:42.396441  279095 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 11:58:42.396457  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.397661  279095 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 11:58:42.398785  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 11:58:42.398802  279095 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 11:58:42.398822  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.399817  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.400305  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.400328  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.400531  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.400690  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.400848  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.401130  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.402248  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.402677  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.402705  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.402899  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.403091  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.403235  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.403367  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.409172  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I1216 11:58:42.410026  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.410606  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.410625  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.410698  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I1216 11:58:42.411056  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.411179  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.411268  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.411636  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.411653  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.412245  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.412420  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.413415  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.413723  279095 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 11:58:42.413739  279095 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 11:58:42.413757  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.414236  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.415933  279095 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:58:39.333144  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:39.345528  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:39.345605  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:39.380984  276553 cri.go:89] found id: ""
	I1216 11:58:39.381022  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.381042  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:39.381050  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:39.381116  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:39.414143  276553 cri.go:89] found id: ""
	I1216 11:58:39.414179  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.414192  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:39.414200  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:39.414271  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:39.451080  276553 cri.go:89] found id: ""
	I1216 11:58:39.451113  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.451124  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:39.451133  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:39.451194  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:39.486555  276553 cri.go:89] found id: ""
	I1216 11:58:39.486585  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.486593  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:39.486599  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:39.486653  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:39.519626  276553 cri.go:89] found id: ""
	I1216 11:58:39.519663  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.519676  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:39.519683  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:39.519747  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:39.551678  276553 cri.go:89] found id: ""
	I1216 11:58:39.551717  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.551729  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:39.551736  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:39.551793  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:39.585498  276553 cri.go:89] found id: ""
	I1216 11:58:39.585536  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.585548  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:39.585556  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:39.585634  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:39.619904  276553 cri.go:89] found id: ""
	I1216 11:58:39.619941  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.619952  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:39.619967  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:39.619989  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:39.698641  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:39.698673  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:39.698690  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:39.790153  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:39.790199  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:39.836401  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:39.836438  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:39.887171  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:39.887217  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:42.400773  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:42.424070  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:42.424127  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:42.467053  276553 cri.go:89] found id: ""
	I1216 11:58:42.467092  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.467103  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:42.467110  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:42.467171  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:42.510214  276553 cri.go:89] found id: ""
	I1216 11:58:42.510248  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.510260  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:42.510268  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:42.510328  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:42.553938  276553 cri.go:89] found id: ""
	I1216 11:58:42.553974  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.553986  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:42.553994  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:42.554058  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:42.595174  276553 cri.go:89] found id: ""
	I1216 11:58:42.595208  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.595220  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:42.595228  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:42.595293  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:42.631184  276553 cri.go:89] found id: ""
	I1216 11:58:42.631219  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.631231  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:42.631240  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:42.631300  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:42.665302  276553 cri.go:89] found id: ""
	I1216 11:58:42.665328  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.665338  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:42.665346  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:42.665396  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:42.702222  276553 cri.go:89] found id: ""
	I1216 11:58:42.702249  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.702257  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:42.702263  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:42.702311  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:42.735627  276553 cri.go:89] found id: ""
	I1216 11:58:42.735658  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.735667  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:42.735676  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:42.735688  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:42.786111  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:42.786144  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:42.803378  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:42.803413  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:42.882160  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:42.882190  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:42.882207  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:42.969671  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:42.969707  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:42.416975  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.417169  279095 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:58:42.417184  279095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 11:58:42.417201  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.417684  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.417713  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.417898  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.418090  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.418227  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.418322  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.420259  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.420651  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.420679  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.420818  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.420977  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.421115  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.421227  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.598988  279095 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:58:42.619954  279095 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:58:42.620059  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:42.638426  279095 api_server.go:72] duration metric: took 297.04949ms to wait for apiserver process to appear ...
	I1216 11:58:42.638459  279095 api_server.go:88] waiting for apiserver healthz status ...
	I1216 11:58:42.638487  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:42.645697  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1216 11:58:42.647451  279095 api_server.go:141] control plane version: v1.31.2
	I1216 11:58:42.647484  279095 api_server.go:131] duration metric: took 9.015381ms to wait for apiserver health ...
	I1216 11:58:42.647495  279095 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 11:58:42.653389  279095 system_pods.go:59] 8 kube-system pods found
	I1216 11:58:42.653419  279095 system_pods.go:61] "coredns-7c65d6cfc9-cvvpl" [2962f6fc-40be-45f2-9096-aadecacdc0b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 11:58:42.653427  279095 system_pods.go:61] "etcd-newest-cni-409154" [de9adbc9-3837-4367-b60c-0a28b5675107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 11:58:42.653437  279095 system_pods.go:61] "kube-apiserver-newest-cni-409154" [62b9084c-0af2-4592-824c-8769ed71635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 11:58:42.653443  279095 system_pods.go:61] "kube-controller-manager-newest-cni-409154" [bde6fb21-32c3-4da8-909b-20b8b64f6e36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 11:58:42.653447  279095 system_pods.go:61] "kube-proxy-nz6pz" [703e409a-1ced-4b28-babc-6b8d233b9fcd] Running
	I1216 11:58:42.653452  279095 system_pods.go:61] "kube-scheduler-newest-cni-409154" [850ced2f-6dae-40ef-ab2a-c5c66c5cd177] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 11:58:42.653458  279095 system_pods.go:61] "metrics-server-6867b74b74-9tg4t" [ba1ea4d4-8b10-41c3-944a-831cee9abc82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 11:58:42.653464  279095 system_pods.go:61] "storage-provisioner" [b3f6ed1a-bb74-4a9f-81f4-8ee598480fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 11:58:42.653473  279095 system_pods.go:74] duration metric: took 5.971424ms to wait for pod list to return data ...
	I1216 11:58:42.653482  279095 default_sa.go:34] waiting for default service account to be created ...
	I1216 11:58:42.656290  279095 default_sa.go:45] found service account: "default"
	I1216 11:58:42.656311  279095 default_sa.go:55] duration metric: took 2.821034ms for default service account to be created ...
	I1216 11:58:42.656325  279095 kubeadm.go:582] duration metric: took 314.954393ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 11:58:42.656346  279095 node_conditions.go:102] verifying NodePressure condition ...
	I1216 11:58:42.659184  279095 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 11:58:42.659211  279095 node_conditions.go:123] node cpu capacity is 2
	I1216 11:58:42.659224  279095 node_conditions.go:105] duration metric: took 2.872931ms to run NodePressure ...
	I1216 11:58:42.659239  279095 start.go:241] waiting for startup goroutines ...
	I1216 11:58:42.718023  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 11:58:42.718054  279095 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 11:58:42.720098  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 11:58:42.720117  279095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 11:58:42.761050  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:58:42.762948  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 11:58:42.772260  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 11:58:42.772281  279095 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 11:58:42.776710  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 11:58:42.776742  279095 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 11:58:42.815042  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 11:58:42.815075  279095 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 11:58:42.847205  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 11:58:42.847233  279095 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 11:58:42.858645  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 11:58:42.858702  279095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 11:58:42.880891  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 11:58:42.880928  279095 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 11:58:42.901442  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 11:58:42.952713  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 11:58:42.952751  279095 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 11:58:43.107941  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 11:58:43.107984  279095 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 11:58:43.130360  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 11:58:43.130386  279095 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 11:58:43.190120  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 11:58:43.190147  279095 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 11:58:43.217576  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 11:58:44.705014  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.942029783s)
	I1216 11:58:44.705086  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705103  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705109  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.803618622s)
	I1216 11:58:44.705121  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.944038036s)
	I1216 11:58:44.705147  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705162  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705162  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705211  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705465  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.705510  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705518  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705534  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705548  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705644  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705658  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705696  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705708  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705716  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705728  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705739  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705717  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705733  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.705955  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705971  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.706021  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.706032  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.707564  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.707608  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.707647  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.707659  279095 addons.go:475] Verifying addon metrics-server=true in "newest-cni-409154"
	I1216 11:58:44.733968  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.733996  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.734329  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.734355  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.734356  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.986437  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.76879955s)
	I1216 11:58:44.986505  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.986524  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.986925  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.986948  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.986958  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.986966  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.987212  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.987234  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.988962  279095 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-409154 addons enable metrics-server
	
	I1216 11:58:44.990322  279095 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1216 11:58:44.991523  279095 addons.go:510] duration metric: took 2.650165363s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1216 11:58:44.991578  279095 start.go:246] waiting for cluster config update ...
	I1216 11:58:44.991599  279095 start.go:255] writing updated cluster config ...
	I1216 11:58:44.991876  279095 ssh_runner.go:195] Run: rm -f paused
	I1216 11:58:45.051986  279095 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1216 11:58:45.053871  279095 out.go:177] * Done! kubectl is now configured to use "newest-cni-409154" cluster and "default" namespace by default
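	# --- Editor's note (not part of the captured log): a minimal sketch of how the addon state
	# reported above for the "newest-cni-409154" profile could be checked by hand. The first two
	# commands mirror the hint and addon list printed above; the namespace in the last command is
	# an assumption taken from the dashboard-ns.yaml manifest applied earlier, exact output varies.
	minikube -p newest-cni-409154 addons list                     # storage-provisioner, metrics-server, default-storageclass, dashboard should show enabled
	minikube -p newest-cni-409154 addons enable metrics-server    # the hint printed above for full dashboard functionality
	kubectl get pods -n kubernetes-dashboard                      # dashboard workloads created from the dashboard-*.yaml manifests (namespace assumed)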
	I1216 11:58:45.512113  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:45.529025  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:45.529084  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:45.563665  276553 cri.go:89] found id: ""
	I1216 11:58:45.563697  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.563708  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:45.563717  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:45.563776  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:45.596079  276553 cri.go:89] found id: ""
	I1216 11:58:45.596119  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.596132  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:45.596140  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:45.596202  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:45.629014  276553 cri.go:89] found id: ""
	I1216 11:58:45.629042  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.629055  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:45.629062  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:45.629128  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:45.671688  276553 cri.go:89] found id: ""
	I1216 11:58:45.671714  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.671725  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:45.671733  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:45.671788  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:45.711944  276553 cri.go:89] found id: ""
	I1216 11:58:45.711977  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.711987  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:45.711994  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:45.712046  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:45.752121  276553 cri.go:89] found id: ""
	I1216 11:58:45.752155  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.752164  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:45.752170  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:45.752230  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:45.785470  276553 cri.go:89] found id: ""
	I1216 11:58:45.785499  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.785510  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:45.785518  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:45.785576  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:45.819346  276553 cri.go:89] found id: ""
	I1216 11:58:45.819374  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.819387  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:45.819399  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:45.819414  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:45.855153  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:45.855199  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:45.906709  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:45.906745  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:45.919757  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:45.919788  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:45.984752  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:45.984779  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:45.984798  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
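	# --- Editor's note (not a captured log line): the same diagnostics loop ssh_runner executes above,
	# runnable by hand from a shell inside the guest ("minikube ssh" first; add -p <profile-name> for a
	# named profile). Useful while the apiserver keeps refusing connections on localhost:8443.
	sudo crictl ps -a --quiet --name=kube-apiserver    # empty output matches the 'found id: ""' lines above
	sudo journalctl -u kubelet -n 400                  # kubelet logs, as gathered above
	sudo journalctl -u crio -n 400                     # CRI-O logs, as gathered above
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400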
	I1216 11:58:48.559896  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:48.572393  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:48.572475  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:48.603458  276553 cri.go:89] found id: ""
	I1216 11:58:48.603496  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.603508  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:48.603516  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:48.603582  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:48.639883  276553 cri.go:89] found id: ""
	I1216 11:58:48.639920  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.639931  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:48.639938  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:48.640065  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:48.671045  276553 cri.go:89] found id: ""
	I1216 11:58:48.671070  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.671079  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:48.671085  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:48.671152  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:48.703295  276553 cri.go:89] found id: ""
	I1216 11:58:48.703341  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.703351  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:48.703360  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:48.703428  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:48.736411  276553 cri.go:89] found id: ""
	I1216 11:58:48.736442  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.736451  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:48.736457  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:48.736514  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:48.767332  276553 cri.go:89] found id: ""
	I1216 11:58:48.767375  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.767387  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:48.767396  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:48.767461  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:48.800080  276553 cri.go:89] found id: ""
	I1216 11:58:48.800112  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.800123  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:48.800131  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:48.800197  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:48.832760  276553 cri.go:89] found id: ""
	I1216 11:58:48.832802  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.832814  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:48.832826  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:48.832845  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:48.848815  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:48.848855  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:48.930771  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:48.930794  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:48.930808  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:49.005468  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:49.005511  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:49.040128  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:49.040166  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:51.591281  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:51.603590  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:51.603672  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:51.634226  276553 cri.go:89] found id: ""
	I1216 11:58:51.634255  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.634263  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:51.634270  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:51.634324  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:51.665685  276553 cri.go:89] found id: ""
	I1216 11:58:51.665718  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.665726  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:51.665732  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:51.665783  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:51.697159  276553 cri.go:89] found id: ""
	I1216 11:58:51.697192  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.697200  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:51.697206  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:51.697255  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:51.729513  276553 cri.go:89] found id: ""
	I1216 11:58:51.729543  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.729551  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:51.729556  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:51.729611  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:51.760525  276553 cri.go:89] found id: ""
	I1216 11:58:51.760559  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.760568  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:51.760574  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:51.760634  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:51.791787  276553 cri.go:89] found id: ""
	I1216 11:58:51.791824  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.791835  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:51.791844  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:51.791897  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:51.823131  276553 cri.go:89] found id: ""
	I1216 11:58:51.823166  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.823177  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:51.823186  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:51.823258  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:51.854638  276553 cri.go:89] found id: ""
	I1216 11:58:51.854675  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.854688  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:51.854699  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:51.854720  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:51.903207  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:51.903247  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:51.916182  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:51.916210  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:51.978879  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:51.978906  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:51.978918  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:52.054050  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:52.054087  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:54.592784  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:54.606444  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:54.606511  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:54.641053  276553 cri.go:89] found id: ""
	I1216 11:58:54.641094  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.641106  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:54.641114  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:54.641194  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:54.672984  276553 cri.go:89] found id: ""
	I1216 11:58:54.673018  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.673027  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:54.673032  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:54.673081  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:54.705118  276553 cri.go:89] found id: ""
	I1216 11:58:54.705144  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.705153  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:54.705159  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:54.705210  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:54.735744  276553 cri.go:89] found id: ""
	I1216 11:58:54.735778  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.735791  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:54.735798  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:54.735851  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:54.767983  276553 cri.go:89] found id: ""
	I1216 11:58:54.768012  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.768020  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:54.768027  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:54.768076  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:54.799412  276553 cri.go:89] found id: ""
	I1216 11:58:54.799440  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.799448  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:54.799455  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:54.799506  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:54.830329  276553 cri.go:89] found id: ""
	I1216 11:58:54.830357  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.830365  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:54.830371  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:54.830421  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:54.861544  276553 cri.go:89] found id: ""
	I1216 11:58:54.861573  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.861583  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:54.861593  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:54.861606  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:54.911522  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:54.911562  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:54.923947  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:54.923980  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:55.000816  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:55.000838  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:55.000854  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:55.072803  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:55.072845  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:57.608748  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:57.622071  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:57.622149  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:57.653826  276553 cri.go:89] found id: ""
	I1216 11:58:57.653863  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.653876  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:57.653885  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:57.653946  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:57.686809  276553 cri.go:89] found id: ""
	I1216 11:58:57.686839  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.686852  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:57.686860  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:57.686931  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:57.719565  276553 cri.go:89] found id: ""
	I1216 11:58:57.719601  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.719613  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:57.719622  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:57.719676  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:57.752279  276553 cri.go:89] found id: ""
	I1216 11:58:57.752318  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.752330  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:57.752339  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:57.752403  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:57.785915  276553 cri.go:89] found id: ""
	I1216 11:58:57.785949  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.785961  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:57.785969  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:57.786039  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:57.818703  276553 cri.go:89] found id: ""
	I1216 11:58:57.818734  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.818748  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:57.818754  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:57.818821  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:57.856323  276553 cri.go:89] found id: ""
	I1216 11:58:57.856362  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.856371  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:57.856377  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:57.856431  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:57.888461  276553 cri.go:89] found id: ""
	I1216 11:58:57.888507  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.888515  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:57.888526  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:57.888543  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:57.924744  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:57.924783  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:57.974915  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:57.974952  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:57.987702  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:57.987737  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:58.047740  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:58.047764  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:58.047779  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:59:00.624270  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:59:00.636790  276553 kubeadm.go:597] duration metric: took 4m2.920412851s to restartPrimaryControlPlane
	W1216 11:59:00.636868  276553 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 11:59:00.636890  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 11:59:01.078876  276553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:59:01.092675  276553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:59:01.102060  276553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:59:01.111330  276553 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:59:01.111353  276553 kubeadm.go:157] found existing configuration files:
	
	I1216 11:59:01.111396  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:59:01.120045  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:59:01.120110  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:59:01.128974  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:59:01.137554  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:59:01.137630  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:59:01.146493  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:59:01.154841  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:59:01.154904  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:59:01.163934  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:59:01.172584  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:59:01.172637  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
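	# --- Editor's note (illustrative only, not a captured log line): the stale-config cleanup above
	# follows one pattern per kubeconfig file: grep for the expected control-plane endpoint and remove
	# the file when the check fails (here each file is missing, so every grep exits with status 2).
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"   # same grep-then-remove sequence ssh_runner executed above
	done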
	I1216 11:59:01.181391  276553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 11:59:01.369411  276553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:00:57.257269  276553 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 12:00:57.257376  276553 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 12:00:57.258891  276553 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 12:00:57.258974  276553 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:00:57.259041  276553 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:00:57.259123  276553 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:00:57.259218  276553 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:00:57.259321  276553 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:00:57.262146  276553 out.go:235]   - Generating certificates and keys ...
	I1216 12:00:57.262267  276553 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:00:57.262347  276553 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:00:57.262465  276553 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:00:57.262571  276553 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:00:57.262667  276553 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:00:57.262717  276553 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:00:57.262791  276553 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:00:57.262860  276553 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:00:57.262924  276553 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:00:57.262996  276553 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:00:57.263030  276553 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:00:57.263084  276553 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:00:57.263135  276553 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:00:57.263181  276553 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:00:57.263235  276553 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:00:57.263281  276553 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:00:57.263373  276553 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:00:57.263445  276553 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:00:57.263481  276553 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:00:57.263542  276553 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:00:57.265255  276553 out.go:235]   - Booting up control plane ...
	I1216 12:00:57.265379  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:00:57.265453  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:00:57.265511  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:00:57.265629  276553 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:00:57.265768  276553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:00:57.265811  276553 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 12:00:57.265917  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266078  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266159  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266350  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266437  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266649  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266712  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266895  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266973  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.267138  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.267150  276553 kubeadm.go:310] 
	I1216 12:00:57.267214  276553 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 12:00:57.267271  276553 kubeadm.go:310] 		timed out waiting for the condition
	I1216 12:00:57.267281  276553 kubeadm.go:310] 
	I1216 12:00:57.267334  276553 kubeadm.go:310] 	This error is likely caused by:
	I1216 12:00:57.267378  276553 kubeadm.go:310] 		- The kubelet is not running
	I1216 12:00:57.267488  276553 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 12:00:57.267499  276553 kubeadm.go:310] 
	I1216 12:00:57.267604  276553 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 12:00:57.267659  276553 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 12:00:57.267700  276553 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 12:00:57.267716  276553 kubeadm.go:310] 
	I1216 12:00:57.267867  276553 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 12:00:57.267965  276553 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 12:00:57.267976  276553 kubeadm.go:310] 
	I1216 12:00:57.268074  276553 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 12:00:57.268144  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 12:00:57.268210  276553 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 12:00:57.268279  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 12:00:57.268328  276553 kubeadm.go:310] 
	W1216 12:00:57.268428  276553 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
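	# --- Editor's note (not a captured log line): the failure text above suggests its own triage steps;
	# collected here as a runnable sketch from a shell inside the guest ("minikube ssh"; profile assumed).
	sudo systemctl status kubelet                                    # is the kubelet running at all?
	sudo journalctl -xeu kubelet                                     # why the localhost:10248/healthz probe is refused
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause   # control-plane containers, if any started
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID                    # inspect a failing container (ID placeholder, as in the advice above)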
	
	I1216 12:00:57.268489  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 12:00:57.717860  276553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 12:00:57.733963  276553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:00:57.744259  276553 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 12:00:57.744288  276553 kubeadm.go:157] found existing configuration files:
	
	I1216 12:00:57.744336  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 12:00:57.753893  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 12:00:57.753977  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:00:57.764071  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 12:00:57.773595  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 12:00:57.773682  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:00:57.783828  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 12:00:57.793769  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 12:00:57.793839  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:00:57.803766  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 12:00:57.813437  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 12:00:57.813513  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 12:00:57.823881  276553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 12:00:57.888749  276553 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 12:00:57.888835  276553 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:00:58.038785  276553 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:00:58.038916  276553 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:00:58.039088  276553 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:00:58.223884  276553 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:00:58.225611  276553 out.go:235]   - Generating certificates and keys ...
	I1216 12:00:58.225731  276553 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:00:58.225852  276553 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:00:58.225980  276553 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:00:58.226074  276553 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:00:58.226178  276553 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:00:58.226255  276553 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:00:58.226344  276553 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:00:58.226424  276553 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:00:58.226551  276553 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:00:58.226688  276553 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:00:58.226756  276553 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:00:58.226821  276553 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:00:58.353567  276553 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:00:58.694503  276553 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:00:58.792660  276553 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:00:59.086043  276553 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:00:59.108391  276553 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:00:59.108558  276553 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:00:59.108623  276553 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:00:59.247927  276553 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:00:59.249627  276553 out.go:235]   - Booting up control plane ...
	I1216 12:00:59.249774  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:00:59.251436  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:00:59.254163  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:00:59.257479  276553 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:00:59.261730  276553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:01:39.263454  276553 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 12:01:39.263569  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:39.263847  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:01:44.264678  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:44.264927  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:01:54.265352  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:54.265639  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:14.265999  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:02:14.266235  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:54.265070  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:02:54.265312  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:54.265327  276553 kubeadm.go:310] 
	I1216 12:02:54.265385  276553 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 12:02:54.265445  276553 kubeadm.go:310] 		timed out waiting for the condition
	I1216 12:02:54.265455  276553 kubeadm.go:310] 
	I1216 12:02:54.265515  276553 kubeadm.go:310] 	This error is likely caused by:
	I1216 12:02:54.265563  276553 kubeadm.go:310] 		- The kubelet is not running
	I1216 12:02:54.265722  276553 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 12:02:54.265750  276553 kubeadm.go:310] 
	I1216 12:02:54.265890  276553 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 12:02:54.265936  276553 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 12:02:54.265973  276553 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 12:02:54.265995  276553 kubeadm.go:310] 
	I1216 12:02:54.266136  276553 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 12:02:54.266255  276553 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 12:02:54.266265  276553 kubeadm.go:310] 
	I1216 12:02:54.266405  276553 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 12:02:54.266530  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 12:02:54.266638  276553 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 12:02:54.266729  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 12:02:54.266748  276553 kubeadm.go:310] 
	I1216 12:02:54.267271  276553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:02:54.267355  276553 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 12:02:54.267426  276553 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 12:02:54.267491  276553 kubeadm.go:394] duration metric: took 7m56.598620484s to StartCluster
	I1216 12:02:54.267542  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 12:02:54.267613  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 12:02:54.301812  276553 cri.go:89] found id: ""
	I1216 12:02:54.301847  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.301855  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 12:02:54.301863  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 12:02:54.301917  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 12:02:54.334730  276553 cri.go:89] found id: ""
	I1216 12:02:54.334768  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.334780  276553 logs.go:284] No container was found matching "etcd"
	I1216 12:02:54.334788  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 12:02:54.334853  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 12:02:54.366080  276553 cri.go:89] found id: ""
	I1216 12:02:54.366115  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.366128  276553 logs.go:284] No container was found matching "coredns"
	I1216 12:02:54.366136  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 12:02:54.366202  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 12:02:54.396447  276553 cri.go:89] found id: ""
	I1216 12:02:54.396483  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.396495  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 12:02:54.396503  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 12:02:54.396584  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 12:02:54.429291  276553 cri.go:89] found id: ""
	I1216 12:02:54.429326  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.429337  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 12:02:54.429345  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 12:02:54.429409  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 12:02:54.460235  276553 cri.go:89] found id: ""
	I1216 12:02:54.460268  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.460276  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 12:02:54.460283  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 12:02:54.460334  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 12:02:54.492739  276553 cri.go:89] found id: ""
	I1216 12:02:54.492771  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.492780  276553 logs.go:284] No container was found matching "kindnet"
	I1216 12:02:54.492787  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 12:02:54.492840  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 12:02:54.524322  276553 cri.go:89] found id: ""
	I1216 12:02:54.524358  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.524369  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 12:02:54.524384  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 12:02:54.524400  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:02:54.575979  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 12:02:54.576022  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:02:54.591148  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:02:54.591184  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 12:02:54.704231  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 12:02:54.704259  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 12:02:54.704277  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 12:02:54.804001  276553 logs.go:123] Gathering logs for container status ...
	I1216 12:02:54.804047  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 12:02:54.842021  276553 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 12:02:54.842097  276553 out.go:270] * 
	W1216 12:02:54.842173  276553 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 12:02:54.842192  276553 out.go:270] * 
	W1216 12:02:54.843372  276553 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:02:54.847542  276553 out.go:201] 
	W1216 12:02:54.848991  276553 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 12:02:54.849037  276553 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 12:02:54.849054  276553 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 12:02:54.850514  276553 out.go:201] 
	
	
	==> CRI-O <==
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.430520761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734351117430499655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75642a6f-3a68-4eaf-ae11-693453b21722 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.430966861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=922191d3-8e4e-4c45-b54b-8211afb42740 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.431026153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=922191d3-8e4e-4c45-b54b-8211afb42740 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.431059481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=922191d3-8e4e-4c45-b54b-8211afb42740 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.459876226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=886af8b0-a36b-4c26-ac5e-9e16c63253cb name=/runtime.v1.RuntimeService/Version
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.460017780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=886af8b0-a36b-4c26-ac5e-9e16c63253cb name=/runtime.v1.RuntimeService/Version
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.460821725Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5880486a-38e3-4f67-b636-bc254126f32d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.461206711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734351117461186569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5880486a-38e3-4f67-b636-bc254126f32d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.461644018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad9083e9-cdd3-43cb-97ed-2deb35225105 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.461699244Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad9083e9-cdd3-43cb-97ed-2deb35225105 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.461730984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ad9083e9-cdd3-43cb-97ed-2deb35225105 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.491691601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3d99575-66c4-4f79-a6b2-23887a677307 name=/runtime.v1.RuntimeService/Version
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.491770105Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3d99575-66c4-4f79-a6b2-23887a677307 name=/runtime.v1.RuntimeService/Version
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.492629444Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=878257c1-6db5-47ca-b1ab-61a51a87d9e1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.493025588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734351117493003086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=878257c1-6db5-47ca-b1ab-61a51a87d9e1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.493615636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbd5d43e-da45-41ce-9122-1591ded52398 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.493663676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbd5d43e-da45-41ce-9122-1591ded52398 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.493692841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bbd5d43e-da45-41ce-9122-1591ded52398 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.529580733Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85add637-0c56-49d8-ab81-a792a34ac9cc name=/runtime.v1.RuntimeService/Version
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.529656255Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85add637-0c56-49d8-ab81-a792a34ac9cc name=/runtime.v1.RuntimeService/Version
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.530624914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55c98865-d19c-4878-8d2d-37b03a839174 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.531059678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734351117531035768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55c98865-d19c-4878-8d2d-37b03a839174 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.531529384Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7bc9142-fdcd-456b-88f4-63df5d34f4ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.531600608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7bc9142-fdcd-456b-88f4-63df5d34f4ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:11:57 old-k8s-version-933974 crio[633]: time="2024-12-16 12:11:57.531658530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a7bc9142-fdcd-456b-88f4-63df5d34f4ec name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 11:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051890] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038428] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.993369] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.061080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.582390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.514537] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.064570] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063777] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.185160] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.127891] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.247553] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.349244] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.059200] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.994875] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[Dec16 11:55] kauditd_printk_skb: 46 callbacks suppressed
	[Dec16 11:59] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[Dec16 12:00] systemd-fstab-generator[5310]: Ignoring "noauto" option for root device
	[  +0.068159] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:11:57 up 17 min,  0 users,  load average: 0.11, 0.05, 0.02
	Linux old-k8s-version-933974 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000cd9fe0, 0xc0004376e0)
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]: goroutine 149 [syscall]:
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]: syscall.Syscall6(0xe8, 0xd, 0xc00090fb6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc00090fb6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000b6a260, 0x0, 0x0, 0x0)
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc00046c690)
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Dec 16 12:11:54 old-k8s-version-933974 kubelet[6494]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Dec 16 12:11:54 old-k8s-version-933974 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 16 12:11:54 old-k8s-version-933974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 12:11:55 old-k8s-version-933974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Dec 16 12:11:55 old-k8s-version-933974 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 16 12:11:55 old-k8s-version-933974 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 16 12:11:55 old-k8s-version-933974 kubelet[6503]: I1216 12:11:55.194066    6503 server.go:416] Version: v1.20.0
	Dec 16 12:11:55 old-k8s-version-933974 kubelet[6503]: I1216 12:11:55.194317    6503 server.go:837] Client rotation is on, will bootstrap in background
	Dec 16 12:11:55 old-k8s-version-933974 kubelet[6503]: I1216 12:11:55.196159    6503 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 16 12:11:55 old-k8s-version-933974 kubelet[6503]: I1216 12:11:55.197198    6503 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Dec 16 12:11:55 old-k8s-version-933974 kubelet[6503]: W1216 12:11:55.197304    6503 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-933974 -n old-k8s-version-933974
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 2 (234.789193ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-933974" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.55s)
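Note: the log above points at the kubelet rather than the tests themselves: on old-k8s-version-933974 the kubelet keeps crash-looping (systemd restart counter at 114, "Cannot detect current cgroup on cgroup v2"), so kubeadm times out in wait-control-plane and the apiserver stays Stopped. A minimal manual follow-up, assuming the old-k8s-version-933974 VM from this run is still reachable, could look roughly like this (the first three commands mirror the troubleshooting steps quoted in the kubeadm output; the last retries the start with the cgroup-driver override that the log's own Suggestion line and https://github.com/kubernetes/minikube/issues/4172 point to):

	# inspect kubelet state and recent logs on the node
	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo journalctl -xeu kubelet --no-pager -n 100"
	# list any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	# retry the start with the cgroup driver forced to systemd
	out/minikube-linux-amd64 start -p old-k8s-version-933974 --extra-config=kubelet.cgroup-driver=systemd

This is only a troubleshooting sketch derived from the captured output, not part of the test run itself.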

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (341.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:12:12.186942  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:13:29.962952  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:13:48.522575  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:13:50.823367  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/bridge-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:14:52.223218  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:14:56.030056  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/no-preload-181484/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:15:26.955434  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:15:37.404294  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/default-k8s-diff-port-935544/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:16:06.843903  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:16:19.092910  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/no-preload-181484/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:16:35.653748  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:16:55.824061  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:17:00.469164  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/default-k8s-diff-port-935544/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E1216 12:17:12.186768  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-933974 -n old-k8s-version-933974
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 2 (236.722924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-933974" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-933974 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-933974 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.915µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-933974 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 2 (237.133383ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-933974 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-987169 image list                          | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| delete  | -p embed-certs-987169                                  | embed-certs-987169           | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| start   | -p newest-cni-409154 --memory=2200 --alsologtostderr   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-935544                           | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-935544 | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | default-k8s-diff-port-935544                           |                              |         |         |                     |                     |
	| image   | no-preload-181484 image list                           | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| delete  | -p no-preload-181484                                   | no-preload-181484            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	| addons  | enable metrics-server -p newest-cni-409154             | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:57 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:57 UTC | 16 Dec 24 11:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-409154                  | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-409154 --memory=2200 --alsologtostderr   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| image   | newest-cni-409154 image list                           | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	| delete  | -p newest-cni-409154                                   | newest-cni-409154            | jenkins | v1.34.0 | 16 Dec 24 11:58 UTC | 16 Dec 24 11:58 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:58:10
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:58:10.457214  279095 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:58:10.457320  279095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:58:10.457328  279095 out.go:358] Setting ErrFile to fd 2...
	I1216 11:58:10.457332  279095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:58:10.457523  279095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:58:10.458091  279095 out.go:352] Setting JSON to false
	I1216 11:58:10.459068  279095 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13237,"bootTime":1734337053,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 11:58:10.459136  279095 start.go:139] virtualization: kvm guest
	I1216 11:58:10.461398  279095 out.go:177] * [newest-cni-409154] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 11:58:10.462722  279095 notify.go:220] Checking for updates...
	I1216 11:58:10.462776  279095 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:58:10.464205  279095 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:58:10.465623  279095 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:58:10.466987  279095 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:58:10.468240  279095 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 11:58:10.469465  279095 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:58:10.470955  279095 config.go:182] Loaded profile config "newest-cni-409154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:58:10.471351  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:10.471415  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:10.486592  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
	I1216 11:58:10.487085  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:10.487663  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:10.487693  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:10.488179  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:10.488439  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:10.488761  279095 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:58:10.489224  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:10.489296  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:10.505146  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I1216 11:58:10.505678  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:10.506233  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:10.506264  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:10.506714  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:10.506902  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:10.544395  279095 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 11:58:10.545779  279095 start.go:297] selected driver: kvm2
	I1216 11:58:10.545792  279095 start.go:901] validating driver "kvm2" against &{Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:58:10.545905  279095 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:58:10.546668  279095 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:58:10.546758  279095 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 11:58:10.563076  279095 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 11:58:10.563675  279095 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 11:58:10.563714  279095 cni.go:84] Creating CNI manager for ""
	I1216 11:58:10.563781  279095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:58:10.563837  279095 start.go:340] cluster config:
	{Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:58:10.564033  279095 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:58:10.565811  279095 out.go:177] * Starting "newest-cni-409154" primary control-plane node in "newest-cni-409154" cluster
	I1216 11:58:10.567051  279095 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:58:10.567086  279095 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1216 11:58:10.567099  279095 cache.go:56] Caching tarball of preloaded images
	I1216 11:58:10.567176  279095 preload.go:172] Found /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 11:58:10.567186  279095 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1216 11:58:10.567281  279095 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/config.json ...
	I1216 11:58:10.567464  279095 start.go:360] acquireMachinesLock for newest-cni-409154: {Name:mk416695f38f04563a2ca155c5383ee4d8f0f97b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 11:58:10.567508  279095 start.go:364] duration metric: took 24.753µs to acquireMachinesLock for "newest-cni-409154"
	I1216 11:58:10.567522  279095 start.go:96] Skipping create...Using existing machine configuration
	I1216 11:58:10.567530  279095 fix.go:54] fixHost starting: 
	I1216 11:58:10.567819  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:10.567855  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:10.582641  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46159
	I1216 11:58:10.583122  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:10.583779  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:10.583807  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:10.584109  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:10.584302  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:10.584447  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:10.585895  279095 fix.go:112] recreateIfNeeded on newest-cni-409154: state=Stopped err=<nil>
	I1216 11:58:10.585928  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	W1216 11:58:10.586110  279095 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 11:58:10.587967  279095 out.go:177] * Restarting existing kvm2 VM for "newest-cni-409154" ...
	I1216 11:58:08.692849  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:08.705140  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:08.705206  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:08.745953  276553 cri.go:89] found id: ""
	I1216 11:58:08.745985  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.745994  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:08.746001  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:08.746053  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:08.777650  276553 cri.go:89] found id: ""
	I1216 11:58:08.777678  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.777686  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:08.777692  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:08.777753  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:08.810501  276553 cri.go:89] found id: ""
	I1216 11:58:08.810530  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.810541  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:08.810547  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:08.810602  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:08.843082  276553 cri.go:89] found id: ""
	I1216 11:58:08.843111  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.843120  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:08.843126  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:08.843175  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:08.875195  276553 cri.go:89] found id: ""
	I1216 11:58:08.875223  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.875232  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:08.875238  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:08.875308  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:08.907296  276553 cri.go:89] found id: ""
	I1216 11:58:08.907334  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.907346  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:08.907354  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:08.907409  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:08.939491  276553 cri.go:89] found id: ""
	I1216 11:58:08.939525  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.939537  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:08.939544  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:08.939607  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:08.970370  276553 cri.go:89] found id: ""
	I1216 11:58:08.970407  276553 logs.go:282] 0 containers: []
	W1216 11:58:08.970420  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:08.970434  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:08.970452  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:08.983347  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:08.983393  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:09.057735  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:09.057765  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:09.057784  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:09.136549  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:09.136588  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:09.186771  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:09.186811  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:11.756641  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:11.776517  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:11.776588  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:11.813876  276553 cri.go:89] found id: ""
	I1216 11:58:11.813912  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.813925  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:11.813933  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:11.814000  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:11.850775  276553 cri.go:89] found id: ""
	I1216 11:58:11.850813  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.850825  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:11.850835  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:11.850894  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:11.881886  276553 cri.go:89] found id: ""
	I1216 11:58:11.881920  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.881933  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:11.881942  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:11.882008  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:11.913165  276553 cri.go:89] found id: ""
	I1216 11:58:11.913196  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.913209  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:11.913217  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:11.913279  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:11.945192  276553 cri.go:89] found id: ""
	I1216 11:58:11.945220  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.945231  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:11.945239  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:11.945297  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:11.977631  276553 cri.go:89] found id: ""
	I1216 11:58:11.977661  276553 logs.go:282] 0 containers: []
	W1216 11:58:11.977673  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:11.977682  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:11.977755  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:12.009497  276553 cri.go:89] found id: ""
	I1216 11:58:12.009527  276553 logs.go:282] 0 containers: []
	W1216 11:58:12.009536  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:12.009546  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:12.009610  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:12.045501  276553 cri.go:89] found id: ""
	I1216 11:58:12.045524  276553 logs.go:282] 0 containers: []
	W1216 11:58:12.045534  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:12.045547  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:12.045564  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:12.114030  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:12.114057  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:12.114073  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:12.188314  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:12.188356  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:12.224600  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:12.224632  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:12.277641  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:12.277681  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:10.589206  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Start
	I1216 11:58:10.589378  279095 main.go:141] libmachine: (newest-cni-409154) starting domain...
	I1216 11:58:10.589402  279095 main.go:141] libmachine: (newest-cni-409154) ensuring networks are active...
	I1216 11:58:10.590045  279095 main.go:141] libmachine: (newest-cni-409154) Ensuring network default is active
	I1216 11:58:10.590345  279095 main.go:141] libmachine: (newest-cni-409154) Ensuring network mk-newest-cni-409154 is active
	I1216 11:58:10.590691  279095 main.go:141] libmachine: (newest-cni-409154) getting domain XML...
	I1216 11:58:10.591328  279095 main.go:141] libmachine: (newest-cni-409154) creating domain...
	I1216 11:58:11.793966  279095 main.go:141] libmachine: (newest-cni-409154) waiting for IP...
	I1216 11:58:11.795095  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:11.795603  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:11.795695  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:11.795591  279132 retry.go:31] will retry after 244.170622ms: waiting for domain to come up
	I1216 11:58:12.041392  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:12.042035  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:12.042065  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:12.042003  279132 retry.go:31] will retry after 378.076417ms: waiting for domain to come up
	I1216 11:58:12.421749  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:12.422240  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:12.422267  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:12.422216  279132 retry.go:31] will retry after 370.938245ms: waiting for domain to come up
	I1216 11:58:12.794930  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:12.795410  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:12.795430  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:12.795372  279132 retry.go:31] will retry after 380.56228ms: waiting for domain to come up
	I1216 11:58:13.177977  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:13.178564  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:13.178597  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:13.178500  279132 retry.go:31] will retry after 582.330697ms: waiting for domain to come up
	I1216 11:58:13.762033  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:13.762664  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:13.762701  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:13.762593  279132 retry.go:31] will retry after 600.533428ms: waiting for domain to come up
	I1216 11:58:14.364374  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:14.364791  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:14.364828  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:14.364752  279132 retry.go:31] will retry after 773.596823ms: waiting for domain to come up
	I1216 11:58:15.139784  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:15.140270  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:15.140300  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:15.140224  279132 retry.go:31] will retry after 1.264403571s: waiting for domain to come up
	I1216 11:58:14.791934  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:14.805168  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:14.805255  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:14.837804  276553 cri.go:89] found id: ""
	I1216 11:58:14.837834  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.837898  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:14.837911  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:14.837976  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:14.871140  276553 cri.go:89] found id: ""
	I1216 11:58:14.871171  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.871183  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:14.871191  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:14.871254  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:14.903081  276553 cri.go:89] found id: ""
	I1216 11:58:14.903118  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.903127  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:14.903133  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:14.903196  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:14.942599  276553 cri.go:89] found id: ""
	I1216 11:58:14.942637  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.942650  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:14.942658  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:14.942723  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:14.981765  276553 cri.go:89] found id: ""
	I1216 11:58:14.981797  276553 logs.go:282] 0 containers: []
	W1216 11:58:14.981809  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:14.981816  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:14.981878  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:15.020936  276553 cri.go:89] found id: ""
	I1216 11:58:15.020977  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.020987  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:15.020993  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:15.021052  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:15.053954  276553 cri.go:89] found id: ""
	I1216 11:58:15.053995  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.054008  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:15.054016  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:15.054081  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:15.088792  276553 cri.go:89] found id: ""
	I1216 11:58:15.088828  276553 logs.go:282] 0 containers: []
	W1216 11:58:15.088839  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:15.088852  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:15.088867  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:15.143836  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:15.143873  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:15.162594  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:15.162637  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:15.252534  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:15.252562  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:15.252578  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:15.337849  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:15.337892  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:17.880680  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:17.893716  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:17.893807  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:17.928342  276553 cri.go:89] found id: ""
	I1216 11:58:17.928379  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.928394  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:17.928402  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:17.928468  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:17.964564  276553 cri.go:89] found id: ""
	I1216 11:58:17.964609  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.964618  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:17.964624  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:17.964677  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:16.406244  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:16.406755  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:16.406782  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:16.406707  279132 retry.go:31] will retry after 1.148140994s: waiting for domain to come up
	I1216 11:58:17.557073  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:17.557603  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:17.557625  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:17.557562  279132 retry.go:31] will retry after 1.49928484s: waiting for domain to come up
	I1216 11:58:19.058022  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:19.058469  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:19.058493  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:19.058429  279132 retry.go:31] will retry after 1.785857688s: waiting for domain to come up
	I1216 11:58:17.999903  276553 cri.go:89] found id: ""
	I1216 11:58:17.999937  276553 logs.go:282] 0 containers: []
	W1216 11:58:17.999946  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:17.999952  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:18.000011  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:18.042198  276553 cri.go:89] found id: ""
	I1216 11:58:18.042230  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.042243  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:18.042250  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:18.042314  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:18.078020  276553 cri.go:89] found id: ""
	I1216 11:58:18.078056  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.078070  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:18.078080  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:18.078154  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:18.111353  276553 cri.go:89] found id: ""
	I1216 11:58:18.111392  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.111404  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:18.111412  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:18.111485  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:18.147126  276553 cri.go:89] found id: ""
	I1216 11:58:18.147161  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.147172  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:18.147178  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:18.147245  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:18.181924  276553 cri.go:89] found id: ""
	I1216 11:58:18.181962  276553 logs.go:282] 0 containers: []
	W1216 11:58:18.181974  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:18.181989  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:18.182007  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:18.235545  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:18.235588  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:18.251579  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:18.251610  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:18.316207  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:18.316238  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:18.316255  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:18.389630  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:18.389677  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:20.929592  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:20.944290  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:20.944382  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:20.991069  276553 cri.go:89] found id: ""
	I1216 11:58:20.991107  276553 logs.go:282] 0 containers: []
	W1216 11:58:20.991118  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:20.991126  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:20.991191  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:21.033257  276553 cri.go:89] found id: ""
	I1216 11:58:21.033291  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.033304  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:21.033311  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:21.033397  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:21.068318  276553 cri.go:89] found id: ""
	I1216 11:58:21.068357  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.068370  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:21.068378  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:21.068449  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:21.100812  276553 cri.go:89] found id: ""
	I1216 11:58:21.100847  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.100860  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:21.100867  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:21.100943  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:21.136004  276553 cri.go:89] found id: ""
	I1216 11:58:21.136037  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.136048  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:21.136054  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:21.136121  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:21.172785  276553 cri.go:89] found id: ""
	I1216 11:58:21.172825  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.172836  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:21.172842  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:21.172907  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:21.207325  276553 cri.go:89] found id: ""
	I1216 11:58:21.207381  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.207402  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:21.207413  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:21.207480  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:21.242438  276553 cri.go:89] found id: ""
	I1216 11:58:21.242479  276553 logs.go:282] 0 containers: []
	W1216 11:58:21.242493  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:21.242508  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:21.242526  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:21.283025  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:21.283069  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:21.335930  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:21.335979  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:21.349370  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:21.349403  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:21.427874  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:21.427914  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:21.427932  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:20.846031  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:20.846581  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:20.846631  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:20.846572  279132 retry.go:31] will retry after 2.9103898s: waiting for domain to come up
	I1216 11:58:23.760767  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:23.761253  279095 main.go:141] libmachine: (newest-cni-409154) DBG | unable to find current IP address of domain newest-cni-409154 in network mk-newest-cni-409154
	I1216 11:58:23.761287  279095 main.go:141] libmachine: (newest-cni-409154) DBG | I1216 11:58:23.761188  279132 retry.go:31] will retry after 3.698063043s: waiting for domain to come up
	I1216 11:58:24.015947  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:24.028721  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:24.028787  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:24.061707  276553 cri.go:89] found id: ""
	I1216 11:58:24.061736  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.061745  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:24.061751  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:24.061803  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:24.095657  276553 cri.go:89] found id: ""
	I1216 11:58:24.095687  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.095696  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:24.095702  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:24.095752  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:24.128755  276553 cri.go:89] found id: ""
	I1216 11:58:24.128784  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.128793  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:24.128799  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:24.128847  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:24.162145  276553 cri.go:89] found id: ""
	I1216 11:58:24.162180  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.162189  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:24.162194  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:24.162248  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:24.194650  276553 cri.go:89] found id: ""
	I1216 11:58:24.194689  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.194702  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:24.194709  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:24.194784  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:24.226091  276553 cri.go:89] found id: ""
	I1216 11:58:24.226127  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.226139  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:24.226147  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:24.226207  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:24.258140  276553 cri.go:89] found id: ""
	I1216 11:58:24.258184  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.258194  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:24.258200  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:24.258254  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:24.289916  276553 cri.go:89] found id: ""
	I1216 11:58:24.289948  276553 logs.go:282] 0 containers: []
	W1216 11:58:24.289957  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:24.289969  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:24.289982  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:24.338070  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:24.338118  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:24.351201  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:24.351242  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:24.422998  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:24.423027  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:24.423039  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:24.499059  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:24.499113  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:27.036987  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:27.049417  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:27.049505  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:27.080923  276553 cri.go:89] found id: ""
	I1216 11:58:27.080951  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.080971  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:27.080980  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:27.081037  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:27.111686  276553 cri.go:89] found id: ""
	I1216 11:58:27.111717  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.111725  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:27.111731  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:27.111781  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:27.142935  276553 cri.go:89] found id: ""
	I1216 11:58:27.142966  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.142976  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:27.142984  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:27.143048  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:27.176277  276553 cri.go:89] found id: ""
	I1216 11:58:27.176309  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.176320  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:27.176326  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:27.176399  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:27.206698  276553 cri.go:89] found id: ""
	I1216 11:58:27.206733  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.206744  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:27.206752  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:27.206816  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:27.238188  276553 cri.go:89] found id: ""
	I1216 11:58:27.238225  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.238245  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:27.238253  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:27.238319  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:27.269646  276553 cri.go:89] found id: ""
	I1216 11:58:27.269678  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.269690  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:27.269697  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:27.269764  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:27.304992  276553 cri.go:89] found id: ""
	I1216 11:58:27.305022  276553 logs.go:282] 0 containers: []
	W1216 11:58:27.305032  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:27.305042  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:27.305057  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:27.379755  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:27.379798  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:27.415958  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:27.415998  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:27.468345  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:27.468378  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:27.482879  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:27.482910  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:27.551153  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:27.461758  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.462297  279095 main.go:141] libmachine: (newest-cni-409154) found domain IP: 192.168.39.202
	I1216 11:58:27.462330  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has current primary IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.462345  279095 main.go:141] libmachine: (newest-cni-409154) reserving static IP address...
	I1216 11:58:27.462706  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "newest-cni-409154", mac: "52:54:00:23:fb:52", ip: "192.168.39.202"} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.462733  279095 main.go:141] libmachine: (newest-cni-409154) DBG | skip adding static IP to network mk-newest-cni-409154 - found existing host DHCP lease matching {name: "newest-cni-409154", mac: "52:54:00:23:fb:52", ip: "192.168.39.202"}
	I1216 11:58:27.462751  279095 main.go:141] libmachine: (newest-cni-409154) reserved static IP address 192.168.39.202 for domain newest-cni-409154
	I1216 11:58:27.462761  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Getting to WaitForSSH function...
	I1216 11:58:27.462769  279095 main.go:141] libmachine: (newest-cni-409154) waiting for SSH...
	I1216 11:58:27.464970  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.465299  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.465323  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.465446  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Using SSH client type: external
	I1216 11:58:27.465486  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Using SSH private key: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa (-rw-------)
	I1216 11:58:27.465535  279095 main.go:141] libmachine: (newest-cni-409154) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1216 11:58:27.465568  279095 main.go:141] libmachine: (newest-cni-409154) DBG | About to run SSH command:
	I1216 11:58:27.465586  279095 main.go:141] libmachine: (newest-cni-409154) DBG | exit 0
	I1216 11:58:27.589004  279095 main.go:141] libmachine: (newest-cni-409154) DBG | SSH cmd err, output: <nil>: 
	I1216 11:58:27.589479  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetConfigRaw
	I1216 11:58:27.590146  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:27.592843  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.593292  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.593326  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.593571  279095 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/config.json ...
	I1216 11:58:27.593797  279095 machine.go:93] provisionDockerMachine start ...
	I1216 11:58:27.593817  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:27.594055  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.597195  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.597567  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.597598  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.597715  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:27.597907  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.598105  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.598253  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:27.598462  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:27.598720  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:27.598734  279095 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 11:58:27.697242  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 11:58:27.697284  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetMachineName
	I1216 11:58:27.697579  279095 buildroot.go:166] provisioning hostname "newest-cni-409154"
	I1216 11:58:27.697618  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetMachineName
	I1216 11:58:27.697818  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.700788  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.701199  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.701231  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.701465  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:27.701659  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.701868  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.701996  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:27.702154  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:27.702385  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:27.702412  279095 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-409154 && echo "newest-cni-409154" | sudo tee /etc/hostname
	I1216 11:58:27.810794  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-409154
	
	I1216 11:58:27.810827  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.813678  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.814176  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.814219  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.814350  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:27.814559  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.814706  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:27.814856  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:27.815025  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:27.815211  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:27.815227  279095 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-409154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-409154/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-409154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:58:27.921763  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
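Note: the shell fragment above pins the machine's own hostname to 127.0.1.1 so local name resolution works before any cluster DNS exists. On this guest the end state is an /etc/hosts entry of the form (illustrative):

	127.0.1.1 newest-cni-409154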
	I1216 11:58:27.921799  279095 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20107-210204/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-210204/.minikube}
	I1216 11:58:27.921855  279095 buildroot.go:174] setting up certificates
	I1216 11:58:27.921869  279095 provision.go:84] configureAuth start
	I1216 11:58:27.921885  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetMachineName
	I1216 11:58:27.922180  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:27.924925  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.925273  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.925305  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.925452  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:27.927662  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.927976  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:27.928006  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:27.928167  279095 provision.go:143] copyHostCerts
	I1216 11:58:27.928234  279095 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem, removing ...
	I1216 11:58:27.928247  279095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem
	I1216 11:58:27.928329  279095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/ca.pem (1078 bytes)
	I1216 11:58:27.928444  279095 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem, removing ...
	I1216 11:58:27.928456  279095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem
	I1216 11:58:27.928491  279095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/cert.pem (1123 bytes)
	I1216 11:58:27.928836  279095 exec_runner.go:144] found /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem, removing ...
	I1216 11:58:27.928995  279095 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem
	I1216 11:58:27.929057  279095 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-210204/.minikube/key.pem (1679 bytes)
	I1216 11:58:27.929198  279095 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem org=jenkins.newest-cni-409154 san=[127.0.0.1 192.168.39.202 localhost minikube newest-cni-409154]
	I1216 11:58:28.119927  279095 provision.go:177] copyRemoteCerts
	I1216 11:58:28.119993  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:58:28.120033  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.122642  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.122863  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.122888  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.123099  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.123312  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.123510  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.123639  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.203158  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:58:28.230017  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1216 11:58:28.255874  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
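Note: configureAuth generated a server certificate with SANs for 127.0.0.1, 192.168.39.202, localhost, minikube and newest-cni-409154, then copied it to /etc/docker on the guest. The SAN list on the installed cert could be confirmed with (illustrative, not part of the test):

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'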
	I1216 11:58:28.281032  279095 provision.go:87] duration metric: took 359.143013ms to configureAuth
	I1216 11:58:28.281064  279095 buildroot.go:189] setting minikube options for container-runtime
	I1216 11:58:28.281272  279095 config.go:182] Loaded profile config "newest-cni-409154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:58:28.281381  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.283867  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.284173  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.284205  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.284362  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.284586  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.284761  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.284907  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.285075  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:28.285289  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:28.285311  279095 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:58:28.493363  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:58:28.493395  279095 machine.go:96] duration metric: took 899.585204ms to provisionDockerMachine
	I1216 11:58:28.493418  279095 start.go:293] postStartSetup for "newest-cni-409154" (driver="kvm2")
	I1216 11:58:28.493435  279095 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:58:28.493464  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.493804  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:58:28.493837  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.496887  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.497252  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.497305  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.497551  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.497781  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.497974  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.498122  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.575105  279095 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:58:28.579116  279095 info.go:137] Remote host: Buildroot 2023.02.9
	I1216 11:58:28.579146  279095 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/addons for local assets ...
	I1216 11:58:28.579210  279095 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-210204/.minikube/files for local assets ...
	I1216 11:58:28.579283  279095 filesync.go:149] local asset: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem -> 2175192.pem in /etc/ssl/certs
	I1216 11:58:28.579384  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 11:58:28.588438  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:58:28.611018  279095 start.go:296] duration metric: took 117.581046ms for postStartSetup
	I1216 11:58:28.611076  279095 fix.go:56] duration metric: took 18.043540567s for fixHost
	I1216 11:58:28.611100  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.614398  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.614793  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.614826  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.615084  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.615326  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.615523  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.615719  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.615908  279095 main.go:141] libmachine: Using SSH client type: native
	I1216 11:58:28.616090  279095 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I1216 11:58:28.616105  279095 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1216 11:58:28.717339  279095 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734350308.689557050
	
	I1216 11:58:28.717371  279095 fix.go:216] guest clock: 1734350308.689557050
	I1216 11:58:28.717382  279095 fix.go:229] Guest: 2024-12-16 11:58:28.68955705 +0000 UTC Remote: 2024-12-16 11:58:28.611080616 +0000 UTC m=+18.193007687 (delta=78.476434ms)
	I1216 11:58:28.717413  279095 fix.go:200] guest clock delta is within tolerance: 78.476434ms
	I1216 11:58:28.717419  279095 start.go:83] releasing machines lock for "newest-cni-409154", held for 18.149901468s
	I1216 11:58:28.717440  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.717739  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:28.720755  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.721190  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.721220  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.721383  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.721877  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.722040  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:28.722130  279095 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:58:28.722179  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.722312  279095 ssh_runner.go:195] Run: cat /version.json
	I1216 11:58:28.722337  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:28.724752  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725087  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.725113  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725133  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725285  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.725472  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.725600  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:28.725623  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:28.725634  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.725803  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:28.725790  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.725944  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:28.726118  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:28.726278  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:28.798361  279095 ssh_runner.go:195] Run: systemctl --version
	I1216 11:58:28.823281  279095 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:58:28.965469  279095 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 11:58:28.970957  279095 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 11:58:28.971032  279095 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:58:28.986070  279095 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 11:58:28.986095  279095 start.go:495] detecting cgroup driver to use...
	I1216 11:58:28.986168  279095 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:58:29.002166  279095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:58:29.015245  279095 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:58:29.015357  279095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:58:29.028270  279095 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:58:29.040809  279095 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:58:29.153768  279095 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:58:29.296765  279095 docker.go:233] disabling docker service ...
	I1216 11:58:29.296853  279095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:58:29.310642  279095 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:58:29.322968  279095 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:58:29.458651  279095 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:58:29.569319  279095 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:58:29.583488  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:58:29.602278  279095 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 11:58:29.602346  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.612191  279095 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:58:29.612256  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.621862  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.631438  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.641222  279095 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:58:29.652611  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.663073  279095 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:58:29.679545  279095 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
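Note: after the sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf on the guest should carry roughly these settings (a sketch reconstructed from the commands, not a dump of the real file):

	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]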
	I1216 11:58:29.690214  279095 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:58:29.699851  279095 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 11:58:29.699926  279095 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 11:58:29.713189  279095 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
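Note: the two steps above load br_netfilter (the earlier sysctl probe failed because the module was not loaded) and enable IPv4 forwarding, both prerequisites for pod networking. They could be verified afterwards with (illustrative):

	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward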
	I1216 11:58:29.722840  279095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:58:29.848101  279095 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 11:58:29.935007  279095 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:58:29.935088  279095 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:58:29.939824  279095 start.go:563] Will wait 60s for crictl version
	I1216 11:58:29.939910  279095 ssh_runner.go:195] Run: which crictl
	I1216 11:58:29.943491  279095 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:58:29.980696  279095 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 11:58:29.980807  279095 ssh_runner.go:195] Run: crio --version
	I1216 11:58:30.009245  279095 ssh_runner.go:195] Run: crio --version
	I1216 11:58:30.038597  279095 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1216 11:58:30.040039  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetIP
	I1216 11:58:30.042931  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:30.043287  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:30.043320  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:30.043662  279095 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 11:58:30.047939  279095 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:58:30.062384  279095 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1216 11:58:30.063947  279095 kubeadm.go:883] updating cluster {Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:58:30.064099  279095 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:58:30.064174  279095 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:58:30.110756  279095 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1216 11:58:30.110842  279095 ssh_runner.go:195] Run: which lz4
	I1216 11:58:30.115974  279095 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 11:58:30.120455  279095 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 11:58:30.120505  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1216 11:58:30.052180  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:30.065848  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:30.065910  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:30.108387  276553 cri.go:89] found id: ""
	I1216 11:58:30.108418  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.108428  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:30.108436  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:30.108510  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:30.143956  276553 cri.go:89] found id: ""
	I1216 11:58:30.143997  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.144008  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:30.144014  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:30.144079  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:30.177213  276553 cri.go:89] found id: ""
	I1216 11:58:30.177250  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.177263  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:30.177272  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:30.177344  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:30.210808  276553 cri.go:89] found id: ""
	I1216 11:58:30.210846  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.210858  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:30.210867  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:30.210943  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:30.243895  276553 cri.go:89] found id: ""
	I1216 11:58:30.243935  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.243947  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:30.243955  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:30.244026  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:30.282295  276553 cri.go:89] found id: ""
	I1216 11:58:30.282335  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.282347  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:30.282355  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:30.282424  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:30.325096  276553 cri.go:89] found id: ""
	I1216 11:58:30.325127  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.325137  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:30.325146  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:30.325223  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:30.368651  276553 cri.go:89] found id: ""
	I1216 11:58:30.368688  276553 logs.go:282] 0 containers: []
	W1216 11:58:30.368702  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:30.368715  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:30.368732  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:30.429442  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:30.429481  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:30.447157  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:30.447197  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:30.525823  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:30.525851  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:30.525876  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:30.619321  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:30.619374  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:31.365838  279095 crio.go:462] duration metric: took 1.249888265s to copy over tarball
	I1216 11:58:31.365939  279095 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 11:58:33.464744  279095 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.098774355s)
	I1216 11:58:33.464768  279095 crio.go:469] duration metric: took 2.098894697s to extract the tarball
	I1216 11:58:33.464775  279095 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 11:58:33.502605  279095 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:58:33.552519  279095 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 11:58:33.552546  279095 cache_images.go:84] Images are preloaded, skipping loading
	I1216 11:58:33.552564  279095 kubeadm.go:934] updating node { 192.168.39.202 8443 v1.31.2 crio true true} ...
	I1216 11:58:33.552695  279095 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-409154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
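Note: the [Service] override above is the kubelet drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (the 317-byte scp). Once installed, the effective unit can be inspected with (illustrative):

	sudo systemctl cat kubelet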
	I1216 11:58:33.552789  279095 ssh_runner.go:195] Run: crio config
	I1216 11:58:33.599280  279095 cni.go:84] Creating CNI manager for ""
	I1216 11:58:33.599316  279095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:58:33.599330  279095 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1216 11:58:33.599369  279095 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.202 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-409154 NodeName:newest-cni-409154 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 11:58:33.599559  279095 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-409154"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.202"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.202"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 11:58:33.599635  279095 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 11:58:33.611454  279095 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:58:33.611560  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:58:33.620442  279095 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1216 11:58:33.636061  279095 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:58:33.651452  279095 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
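
The configuration document above is rendered in memory and, in the scp just above, copied to /var/tmp/minikube/kubeadm.yaml.new (2295 bytes). As a rough sketch only, assuming a text/template-style renderer rather than minikube's actual generator, the following Go program produces a small subset of such a document; the KubeadmParams struct and its field names are hypothetical.

package main

import (
	"os"
	"text/template"
)

// KubeadmParams is a hypothetical parameter struct for illustration only;
// minikube's real generator uses its own types.
type KubeadmParams struct {
	NodeName   string
	NodeIP     string
	PodSubnet  string
	K8sVersion string
}

// tmpl renders a small subset of the InitConfiguration/ClusterConfiguration
// shown in the log above.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := KubeadmParams{
		NodeName:   "newest-cni-409154",
		NodeIP:     "192.168.39.202",
		PodSubnet:  "10.42.0.0/16",
		K8sVersion: "v1.31.2",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
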
	I1216 11:58:33.667434  279095 ssh_runner.go:195] Run: grep 192.168.39.202	control-plane.minikube.internal$ /etc/hosts
	I1216 11:58:33.672022  279095 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
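
The one-liner above keeps the control-plane.minikube.internal mapping idempotent: it filters out any previous line for that hostname, appends the current IP, and copies the result back over /etc/hosts. A minimal Go sketch of the same filter-and-append pattern, operating on arbitrary input/output paths instead of /etc/hosts (ensureHostEntry is a hypothetical helper, not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry removes any existing line that maps the given hostname and
// appends a fresh "ip<TAB>hostname" entry, mirroring the grep -v / echo
// pipeline in the log above.
func ensureHostEntry(inPath, outPath, ip, hostname string) error {
	data, err := os.ReadFile(inPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale mapping
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(outPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("hosts.in", "hosts.out",
		"192.168.39.202", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
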
	I1216 11:58:33.688407  279095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:58:33.825530  279095 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:58:33.842084  279095 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154 for IP: 192.168.39.202
	I1216 11:58:33.842119  279095 certs.go:194] generating shared ca certs ...
	I1216 11:58:33.842143  279095 certs.go:226] acquiring lock for ca certs: {Name:mkd97e0b0fea726104cb674caf81491f1144961d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:33.842348  279095 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key
	I1216 11:58:33.842417  279095 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key
	I1216 11:58:33.842433  279095 certs.go:256] generating profile certs ...
	I1216 11:58:33.842546  279095 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/client.key
	I1216 11:58:33.842651  279095 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/apiserver.key.4b1f7a67
	I1216 11:58:33.842714  279095 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/proxy-client.key
	I1216 11:58:33.842887  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem (1338 bytes)
	W1216 11:58:33.842940  279095 certs.go:480] ignoring /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519_empty.pem, impossibly tiny 0 bytes
	I1216 11:58:33.842954  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca-key.pem (1675 bytes)
	I1216 11:58:33.842995  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:58:33.843034  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:58:33.843080  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/certs/key.pem (1679 bytes)
	I1216 11:58:33.843153  279095 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem (1708 bytes)
	I1216 11:58:33.843887  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:58:33.888237  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1216 11:58:33.922983  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:58:33.947106  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 11:58:33.979827  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1216 11:58:34.006341  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 11:58:34.029912  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:58:34.052408  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/newest-cni-409154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1216 11:58:34.074341  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:58:34.096314  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/certs/217519.pem --> /usr/share/ca-certificates/217519.pem (1338 bytes)
	I1216 11:58:34.117813  279095 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/ssl/certs/2175192.pem --> /usr/share/ca-certificates/2175192.pem (1708 bytes)
	I1216 11:58:34.139265  279095 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:58:34.154749  279095 ssh_runner.go:195] Run: openssl version
	I1216 11:58:34.160150  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:58:34.170031  279095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:58:34.174128  279095 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 10:32 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:58:34.174192  279095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:58:34.179382  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 11:58:34.189755  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/217519.pem && ln -fs /usr/share/ca-certificates/217519.pem /etc/ssl/certs/217519.pem"
	I1216 11:58:34.200079  279095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/217519.pem
	I1216 11:58:34.204422  279095 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 10:42 /usr/share/ca-certificates/217519.pem
	I1216 11:58:34.204483  279095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/217519.pem
	I1216 11:58:34.210007  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/217519.pem /etc/ssl/certs/51391683.0"
	I1216 11:58:34.219577  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2175192.pem && ln -fs /usr/share/ca-certificates/2175192.pem /etc/ssl/certs/2175192.pem"
	I1216 11:58:34.229612  279095 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2175192.pem
	I1216 11:58:34.233804  279095 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 10:42 /usr/share/ca-certificates/2175192.pem
	I1216 11:58:34.233855  279095 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2175192.pem
	I1216 11:58:34.239357  279095 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2175192.pem /etc/ssl/certs/3ec20f2e.0"
	I1216 11:58:34.249593  279095 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:58:34.253857  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 11:58:34.259667  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 11:58:34.265350  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 11:58:34.271063  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 11:58:34.276571  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 11:58:34.282052  279095 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
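
Each "openssl x509 -noout -in <cert> -checkend 86400" above asserts that the certificate remains valid for at least another 86400 seconds (24 hours). A hedged Go equivalent for a single PEM file, using crypto/x509 (the path in main is simply the first certificate from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the first certificate in the PEM file is still
// valid after the given duration, matching openssl's -checkend semantics.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for 24h:", ok)
}
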
	I1216 11:58:34.287542  279095 kubeadm.go:392] StartCluster: {Name:newest-cni-409154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-409154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:58:34.287635  279095 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:58:34.287698  279095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:58:34.330701  279095 cri.go:89] found id: ""
	I1216 11:58:34.330766  279095 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 11:58:34.340500  279095 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1216 11:58:34.340523  279095 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1216 11:58:34.340563  279095 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 11:58:34.351292  279095 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 11:58:34.351877  279095 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-409154" does not appear in /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:58:34.352074  279095 kubeconfig.go:62] /home/jenkins/minikube-integration/20107-210204/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-409154" cluster setting kubeconfig missing "newest-cni-409154" context setting]
	I1216 11:58:34.352501  279095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:34.353808  279095 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 11:58:34.363101  279095 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.202
	I1216 11:58:34.363144  279095 kubeadm.go:1160] stopping kube-system containers ...
	I1216 11:58:34.363157  279095 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 11:58:34.363210  279095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:58:34.397341  279095 cri.go:89] found id: ""
	I1216 11:58:34.397410  279095 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 11:58:34.412614  279095 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:58:34.421801  279095 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:58:34.421830  279095 kubeadm.go:157] found existing configuration files:
	
	I1216 11:58:34.421890  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:58:34.430246  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:58:34.430309  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:58:34.438808  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:58:34.447241  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:58:34.447315  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:58:34.456064  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:58:34.464112  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:58:34.464179  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:58:34.472719  279095 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:58:34.481088  279095 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:58:34.481162  279095 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
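
The four grep/rm pairs above apply a single rule: any leftover /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is treated as stale and removed so kubeadm can regenerate it. A small Go sketch of that rule (paths and endpoint taken from the log; this is an illustration, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		// Unreadable files and files that do not mention the expected
		// endpoint are removed so kubeadm can recreate them.
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
		}
	}
}
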
	I1216 11:58:34.489902  279095 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:58:34.499478  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:34.600562  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:33.167369  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:33.180007  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:33.180135  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:33.216102  276553 cri.go:89] found id: ""
	I1216 11:58:33.216139  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.216149  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:33.216156  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:33.216219  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:33.264290  276553 cri.go:89] found id: ""
	I1216 11:58:33.264331  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.264351  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:33.264360  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:33.264428  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:33.307400  276553 cri.go:89] found id: ""
	I1216 11:58:33.307440  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.307452  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:33.307461  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:33.307528  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:33.348555  276553 cri.go:89] found id: ""
	I1216 11:58:33.348597  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.348610  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:33.348619  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:33.348688  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:33.385255  276553 cri.go:89] found id: ""
	I1216 11:58:33.385286  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.385296  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:33.385303  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:33.385366  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:33.422656  276553 cri.go:89] found id: ""
	I1216 11:58:33.422701  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.422713  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:33.422722  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:33.422783  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:33.461547  276553 cri.go:89] found id: ""
	I1216 11:58:33.461582  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.461591  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:33.461601  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:33.461651  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:33.496893  276553 cri.go:89] found id: ""
	I1216 11:58:33.496935  276553 logs.go:282] 0 containers: []
	W1216 11:58:33.496948  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:33.496987  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:33.497003  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:33.510577  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:33.510609  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:33.579037  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:33.579064  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:33.579080  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:33.657142  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:33.657178  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:33.703963  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:33.703993  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:36.255123  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:36.269198  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:36.269265  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:36.302149  276553 cri.go:89] found id: ""
	I1216 11:58:36.302189  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.302202  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:36.302210  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:36.302278  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:36.334332  276553 cri.go:89] found id: ""
	I1216 11:58:36.334367  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.334378  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:36.334386  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:36.334478  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:36.367219  276553 cri.go:89] found id: ""
	I1216 11:58:36.367251  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.367262  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:36.367271  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:36.367346  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:36.409111  276553 cri.go:89] found id: ""
	I1216 11:58:36.409142  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.409154  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:36.409162  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:36.409235  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:36.453572  276553 cri.go:89] found id: ""
	I1216 11:58:36.453612  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.453624  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:36.453639  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:36.453713  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:36.498382  276553 cri.go:89] found id: ""
	I1216 11:58:36.498420  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.498430  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:36.498445  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:36.498516  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:36.533177  276553 cri.go:89] found id: ""
	I1216 11:58:36.533213  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.533225  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:36.533234  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:36.533315  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:36.568180  276553 cri.go:89] found id: ""
	I1216 11:58:36.568219  276553 logs.go:282] 0 containers: []
	W1216 11:58:36.568232  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:36.568247  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:36.568263  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:36.631684  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:36.631732  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:36.646177  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:36.646219  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:36.715265  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:36.715298  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:36.715360  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:36.795141  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:36.795187  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:35.572311  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:35.786524  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:35.872020  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
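
Rather than a full "kubeadm init", the restart path replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the uploaded /var/tmp/minikube/kubeadm.yaml. A rough, local-only sketch of driving the same phase sequence with os/exec (minikube runs these over SSH with sudo and a custom PATH, which is omitted here):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phase order copied from the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}
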
	I1216 11:58:35.964712  279095 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:58:35.964813  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:36.465153  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:36.965020  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:37.465530  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:37.965157  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:38.465454  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:38.479784  279095 api_server.go:72] duration metric: took 2.515071544s to wait for apiserver process to appear ...
	I1216 11:58:38.479821  279095 api_server.go:88] waiting for apiserver healthz status ...
	I1216 11:58:38.479849  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.266917  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 11:58:40.266944  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 11:58:40.266957  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.277079  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 11:58:40.277107  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 11:58:40.480677  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.486236  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:58:40.486263  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:58:40.979982  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:40.987028  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:58:40.987054  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:58:41.480764  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:41.487009  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 11:58:41.487037  279095 api_server.go:103] status: https://192.168.39.202:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 11:58:41.980637  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:41.985077  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1216 11:58:41.991955  279095 api_server.go:141] control plane version: v1.31.2
	I1216 11:58:41.991987  279095 api_server.go:131] duration metric: took 3.512159263s to wait for apiserver health ...
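
The sequence above polls https://192.168.39.202:8443/healthz roughly every 500ms, treating 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) as "not ready" and stopping at the first 200. A minimal Go sketch of such a wait loop; InsecureSkipVerify is an assumption standing in for minikube's real CA handling:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes; any other status (403, 500, ...) means "not ready".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: minikube verifies against its own CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.202:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}
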
	I1216 11:58:41.991997  279095 cni.go:84] Creating CNI manager for ""
	I1216 11:58:41.992003  279095 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 11:58:41.993731  279095 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 11:58:41.994974  279095 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 11:58:42.005415  279095 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 11:58:42.022839  279095 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 11:58:42.033438  279095 system_pods.go:59] 8 kube-system pods found
	I1216 11:58:42.033476  279095 system_pods.go:61] "coredns-7c65d6cfc9-cvvpl" [2962f6fc-40be-45f2-9096-aadecacdc0b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 11:58:42.033486  279095 system_pods.go:61] "etcd-newest-cni-409154" [de9adbc9-3837-4367-b60c-0a28b5675107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 11:58:42.033499  279095 system_pods.go:61] "kube-apiserver-newest-cni-409154" [62b9084c-0af2-4592-824c-8769ed71635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 11:58:42.033508  279095 system_pods.go:61] "kube-controller-manager-newest-cni-409154" [bde6fb21-32c3-4da8-909b-20b8b64f6e36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 11:58:42.033521  279095 system_pods.go:61] "kube-proxy-nz6pz" [703e409a-1ced-4b28-babc-6b8d233b9fcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 11:58:42.033534  279095 system_pods.go:61] "kube-scheduler-newest-cni-409154" [850ced2f-6dae-40ef-ab2a-c5c66c5cd177] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 11:58:42.033551  279095 system_pods.go:61] "metrics-server-6867b74b74-9tg4t" [ba1ea4d4-8b10-41c3-944a-831cee9abc82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 11:58:42.033563  279095 system_pods.go:61] "storage-provisioner" [b3f6ed1a-bb74-4a9f-81f4-8ee598480fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 11:58:42.033575  279095 system_pods.go:74] duration metric: took 10.70808ms to wait for pod list to return data ...
	I1216 11:58:42.033585  279095 node_conditions.go:102] verifying NodePressure condition ...
	I1216 11:58:42.036820  279095 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 11:58:42.036844  279095 node_conditions.go:123] node cpu capacity is 2
	I1216 11:58:42.036875  279095 node_conditions.go:105] duration metric: took 3.281402ms to run NodePressure ...
	I1216 11:58:42.036900  279095 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 11:58:42.327663  279095 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 11:58:42.339587  279095 ops.go:34] apiserver oom_adj: -16
	I1216 11:58:42.339616  279095 kubeadm.go:597] duration metric: took 7.999086573s to restartPrimaryControlPlane
	I1216 11:58:42.339627  279095 kubeadm.go:394] duration metric: took 8.052090671s to StartCluster
	I1216 11:58:42.339674  279095 settings.go:142] acquiring lock: {Name:mk3e7a5ea729d05e36deedcd45fd7829ccafa72b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:42.339767  279095 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:58:42.340896  279095 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-210204/kubeconfig: {Name:mk57b3cf865454cf7d68479ab0b998d52a5c91e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:58:42.341317  279095 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.202 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:58:42.341358  279095 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 11:58:42.341468  279095 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-409154"
	I1216 11:58:42.341493  279095 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-409154"
	W1216 11:58:42.341502  279095 addons.go:243] addon storage-provisioner should already be in state true
	I1216 11:58:42.341525  279095 addons.go:69] Setting default-storageclass=true in profile "newest-cni-409154"
	I1216 11:58:42.341546  279095 addons.go:69] Setting dashboard=true in profile "newest-cni-409154"
	I1216 11:58:42.341534  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.341554  279095 config.go:182] Loaded profile config "newest-cni-409154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:58:42.341562  279095 addons.go:69] Setting metrics-server=true in profile "newest-cni-409154"
	I1216 11:58:42.341602  279095 addons.go:234] Setting addon metrics-server=true in "newest-cni-409154"
	W1216 11:58:42.341613  279095 addons.go:243] addon metrics-server should already be in state true
	I1216 11:58:42.341550  279095 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-409154"
	I1216 11:58:42.341668  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.341562  279095 addons.go:234] Setting addon dashboard=true in "newest-cni-409154"
	W1216 11:58:42.341766  279095 addons.go:243] addon dashboard should already be in state true
	I1216 11:58:42.341812  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.342033  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342055  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342066  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342065  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342085  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342107  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342207  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.342230  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.342910  279095 out.go:177] * Verifying Kubernetes components...
	I1216 11:58:42.344377  279095 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:58:42.358561  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34565
	I1216 11:58:42.359188  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.359817  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.359841  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.360254  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.360504  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.362469  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37203
	I1216 11:58:42.362503  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39083
	I1216 11:58:42.362558  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I1216 11:58:42.362857  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.363000  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.363324  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.363351  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.363627  279095 addons.go:234] Setting addon default-storageclass=true in "newest-cni-409154"
	W1216 11:58:42.363647  279095 addons.go:243] addon default-storageclass should already be in state true
	I1216 11:58:42.363681  279095 host.go:66] Checking if "newest-cni-409154" exists ...
	I1216 11:58:42.363730  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.363865  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.363890  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.363979  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.364019  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.364264  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.364300  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.364330  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.364468  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.364811  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.364857  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.365039  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.365061  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.365659  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.366150  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.366193  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.379564  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40203
	I1216 11:58:42.383427  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35491
	I1216 11:58:42.384214  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43403
	I1216 11:58:42.389453  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.389476  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.389687  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.389977  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.389995  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.390001  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.390016  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.390284  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.390308  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.390402  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.390497  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.390711  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.390766  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.390961  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.390969  279095 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:58:42.391003  279095 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:58:42.392531  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.393754  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.394494  279095 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1216 11:58:42.395267  279095 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1216 11:58:42.396422  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 11:58:42.396441  279095 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 11:58:42.396457  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.397661  279095 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1216 11:58:42.398785  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1216 11:58:42.398802  279095 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1216 11:58:42.398822  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.399817  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.400305  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.400328  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.400531  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.400690  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.400848  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.401130  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.402248  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.402677  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.402705  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.402899  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.403091  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.403235  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.403367  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.409172  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46029
	I1216 11:58:42.410026  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.410606  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.410625  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.410698  279095 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42479
	I1216 11:58:42.411056  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.411179  279095 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:58:42.411268  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.411636  279095 main.go:141] libmachine: Using API Version  1
	I1216 11:58:42.411653  279095 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:58:42.412245  279095 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:58:42.412420  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetState
	I1216 11:58:42.413415  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.413723  279095 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 11:58:42.413739  279095 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 11:58:42.413757  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.414236  279095 main.go:141] libmachine: (newest-cni-409154) Calling .DriverName
	I1216 11:58:42.415933  279095 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:58:39.333144  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:39.345528  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:39.345605  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:39.380984  276553 cri.go:89] found id: ""
	I1216 11:58:39.381022  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.381042  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:39.381050  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:39.381116  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:39.414143  276553 cri.go:89] found id: ""
	I1216 11:58:39.414179  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.414192  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:39.414200  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:39.414271  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:39.451080  276553 cri.go:89] found id: ""
	I1216 11:58:39.451113  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.451124  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:39.451133  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:39.451194  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:39.486555  276553 cri.go:89] found id: ""
	I1216 11:58:39.486585  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.486593  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:39.486599  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:39.486653  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:39.519626  276553 cri.go:89] found id: ""
	I1216 11:58:39.519663  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.519676  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:39.519683  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:39.519747  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:39.551678  276553 cri.go:89] found id: ""
	I1216 11:58:39.551717  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.551729  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:39.551736  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:39.551793  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:39.585498  276553 cri.go:89] found id: ""
	I1216 11:58:39.585536  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.585548  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:39.585556  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:39.585634  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:39.619904  276553 cri.go:89] found id: ""
	I1216 11:58:39.619941  276553 logs.go:282] 0 containers: []
	W1216 11:58:39.619952  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:39.619967  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:39.619989  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:39.698641  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:39.698673  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:39.698690  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:39.790153  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:39.790199  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:39.836401  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:39.836438  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:39.887171  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:39.887217  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:42.400773  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:42.424070  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:42.424127  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:42.467053  276553 cri.go:89] found id: ""
	I1216 11:58:42.467092  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.467103  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:42.467110  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:42.467171  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:42.510214  276553 cri.go:89] found id: ""
	I1216 11:58:42.510248  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.510260  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:42.510268  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:42.510328  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:42.553938  276553 cri.go:89] found id: ""
	I1216 11:58:42.553974  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.553986  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:42.553994  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:42.554058  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:42.595174  276553 cri.go:89] found id: ""
	I1216 11:58:42.595208  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.595220  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:42.595228  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:42.595293  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:42.631184  276553 cri.go:89] found id: ""
	I1216 11:58:42.631219  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.631231  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:42.631240  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:42.631300  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:42.665302  276553 cri.go:89] found id: ""
	I1216 11:58:42.665328  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.665338  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:42.665346  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:42.665396  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:42.702222  276553 cri.go:89] found id: ""
	I1216 11:58:42.702249  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.702257  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:42.702263  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:42.702311  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:42.735627  276553 cri.go:89] found id: ""
	I1216 11:58:42.735658  276553 logs.go:282] 0 containers: []
	W1216 11:58:42.735667  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:42.735676  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:42.735688  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:42.786111  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:42.786144  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:42.803378  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:42.803413  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:42.882160  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:42.882190  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:42.882207  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:42.969671  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:42.969707  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:42.416975  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.417169  279095 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:58:42.417184  279095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 11:58:42.417201  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHHostname
	I1216 11:58:42.417684  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.417713  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.417898  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.418090  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.418227  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.418322  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.420259  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.420651  279095 main.go:141] libmachine: (newest-cni-409154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:fb:52", ip: ""} in network mk-newest-cni-409154: {Iface:virbr1 ExpiryTime:2024-12-16 12:58:21 +0000 UTC Type:0 Mac:52:54:00:23:fb:52 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:newest-cni-409154 Clientid:01:52:54:00:23:fb:52}
	I1216 11:58:42.420679  279095 main.go:141] libmachine: (newest-cni-409154) DBG | domain newest-cni-409154 has defined IP address 192.168.39.202 and MAC address 52:54:00:23:fb:52 in network mk-newest-cni-409154
	I1216 11:58:42.420818  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHPort
	I1216 11:58:42.420977  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHKeyPath
	I1216 11:58:42.421115  279095 main.go:141] libmachine: (newest-cni-409154) Calling .GetSSHUsername
	I1216 11:58:42.421227  279095 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/newest-cni-409154/id_rsa Username:docker}
	I1216 11:58:42.598988  279095 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:58:42.619954  279095 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:58:42.620059  279095 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:42.638426  279095 api_server.go:72] duration metric: took 297.04949ms to wait for apiserver process to appear ...
	I1216 11:58:42.638459  279095 api_server.go:88] waiting for apiserver healthz status ...
	I1216 11:58:42.638487  279095 api_server.go:253] Checking apiserver healthz at https://192.168.39.202:8443/healthz ...
	I1216 11:58:42.645697  279095 api_server.go:279] https://192.168.39.202:8443/healthz returned 200:
	ok
	I1216 11:58:42.647451  279095 api_server.go:141] control plane version: v1.31.2
	I1216 11:58:42.647484  279095 api_server.go:131] duration metric: took 9.015381ms to wait for apiserver health ...
	I1216 11:58:42.647495  279095 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 11:58:42.653389  279095 system_pods.go:59] 8 kube-system pods found
	I1216 11:58:42.653419  279095 system_pods.go:61] "coredns-7c65d6cfc9-cvvpl" [2962f6fc-40be-45f2-9096-aadecacdc0b7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 11:58:42.653427  279095 system_pods.go:61] "etcd-newest-cni-409154" [de9adbc9-3837-4367-b60c-0a28b5675107] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 11:58:42.653437  279095 system_pods.go:61] "kube-apiserver-newest-cni-409154" [62b9084c-0af2-4592-824c-8769ed71635c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 11:58:42.653443  279095 system_pods.go:61] "kube-controller-manager-newest-cni-409154" [bde6fb21-32c3-4da8-909b-20b8b64f6e36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 11:58:42.653447  279095 system_pods.go:61] "kube-proxy-nz6pz" [703e409a-1ced-4b28-babc-6b8d233b9fcd] Running
	I1216 11:58:42.653452  279095 system_pods.go:61] "kube-scheduler-newest-cni-409154" [850ced2f-6dae-40ef-ab2a-c5c66c5cd177] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 11:58:42.653458  279095 system_pods.go:61] "metrics-server-6867b74b74-9tg4t" [ba1ea4d4-8b10-41c3-944a-831cee9abc82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 11:58:42.653464  279095 system_pods.go:61] "storage-provisioner" [b3f6ed1a-bb74-4a9f-81f4-8ee598480fd9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 11:58:42.653473  279095 system_pods.go:74] duration metric: took 5.971424ms to wait for pod list to return data ...
	I1216 11:58:42.653482  279095 default_sa.go:34] waiting for default service account to be created ...
	I1216 11:58:42.656290  279095 default_sa.go:45] found service account: "default"
	I1216 11:58:42.656311  279095 default_sa.go:55] duration metric: took 2.821034ms for default service account to be created ...
	I1216 11:58:42.656325  279095 kubeadm.go:582] duration metric: took 314.954393ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1216 11:58:42.656346  279095 node_conditions.go:102] verifying NodePressure condition ...
	I1216 11:58:42.659184  279095 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 11:58:42.659211  279095 node_conditions.go:123] node cpu capacity is 2
	I1216 11:58:42.659224  279095 node_conditions.go:105] duration metric: took 2.872931ms to run NodePressure ...
	I1216 11:58:42.659239  279095 start.go:241] waiting for startup goroutines ...
	I1216 11:58:42.718023  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1216 11:58:42.718054  279095 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1216 11:58:42.720098  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 11:58:42.720117  279095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1216 11:58:42.761050  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:58:42.762948  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 11:58:42.772260  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1216 11:58:42.772281  279095 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1216 11:58:42.776710  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 11:58:42.776742  279095 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 11:58:42.815042  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1216 11:58:42.815075  279095 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1216 11:58:42.847205  279095 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 11:58:42.847233  279095 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 11:58:42.858645  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1216 11:58:42.858702  279095 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1216 11:58:42.880891  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1216 11:58:42.880928  279095 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1216 11:58:42.901442  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 11:58:42.952713  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1216 11:58:42.952751  279095 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1216 11:58:43.107941  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1216 11:58:43.107984  279095 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1216 11:58:43.130360  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1216 11:58:43.130386  279095 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1216 11:58:43.190120  279095 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 11:58:43.190147  279095 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1216 11:58:43.217576  279095 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1216 11:58:44.705014  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.942029783s)
	I1216 11:58:44.705086  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705103  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705109  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.803618622s)
	I1216 11:58:44.705121  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.944038036s)
	I1216 11:58:44.705147  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705162  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705162  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705211  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705465  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.705510  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705518  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705534  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705548  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705644  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705658  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705696  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705708  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.705716  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705728  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705739  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.705717  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.705733  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.705955  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.705971  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.706021  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.706032  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.707564  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.707608  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.707647  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.707659  279095 addons.go:475] Verifying addon metrics-server=true in "newest-cni-409154"
	I1216 11:58:44.733968  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.733996  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.734329  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.734355  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.734356  279095 main.go:141] libmachine: (newest-cni-409154) DBG | Closing plugin on server side
	I1216 11:58:44.986437  279095 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.76879955s)
	I1216 11:58:44.986505  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.986524  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.986925  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.986948  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.986958  279095 main.go:141] libmachine: Making call to close driver server
	I1216 11:58:44.986966  279095 main.go:141] libmachine: (newest-cni-409154) Calling .Close
	I1216 11:58:44.987212  279095 main.go:141] libmachine: Successfully made call to close driver server
	I1216 11:58:44.987234  279095 main.go:141] libmachine: Making call to close connection to plugin binary
	I1216 11:58:44.988962  279095 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-409154 addons enable metrics-server
	
	I1216 11:58:44.990322  279095 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1216 11:58:44.991523  279095 addons.go:510] duration metric: took 2.650165363s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1216 11:58:44.991578  279095 start.go:246] waiting for cluster config update ...
	I1216 11:58:44.991599  279095 start.go:255] writing updated cluster config ...
	I1216 11:58:44.991876  279095 ssh_runner.go:195] Run: rm -f paused
	I1216 11:58:45.051986  279095 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1216 11:58:45.053871  279095 out.go:177] * Done! kubectl is now configured to use "newest-cni-409154" cluster and "default" namespace by default
	I1216 11:58:45.512113  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:45.529025  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:45.529084  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:45.563665  276553 cri.go:89] found id: ""
	I1216 11:58:45.563697  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.563708  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:45.563717  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:45.563776  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:45.596079  276553 cri.go:89] found id: ""
	I1216 11:58:45.596119  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.596132  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:45.596140  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:45.596202  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:45.629014  276553 cri.go:89] found id: ""
	I1216 11:58:45.629042  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.629055  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:45.629062  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:45.629128  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:45.671688  276553 cri.go:89] found id: ""
	I1216 11:58:45.671714  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.671725  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:45.671733  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:45.671788  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:45.711944  276553 cri.go:89] found id: ""
	I1216 11:58:45.711977  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.711987  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:45.711994  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:45.712046  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:45.752121  276553 cri.go:89] found id: ""
	I1216 11:58:45.752155  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.752164  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:45.752170  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:45.752230  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:45.785470  276553 cri.go:89] found id: ""
	I1216 11:58:45.785499  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.785510  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:45.785518  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:45.785576  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:45.819346  276553 cri.go:89] found id: ""
	I1216 11:58:45.819374  276553 logs.go:282] 0 containers: []
	W1216 11:58:45.819387  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:45.819399  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:45.819414  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:45.855153  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:45.855199  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:45.906709  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:45.906745  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:45.919757  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:45.919788  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:45.984752  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:45.984779  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:45.984798  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:48.559896  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:48.572393  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:48.572475  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:48.603458  276553 cri.go:89] found id: ""
	I1216 11:58:48.603496  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.603508  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:48.603516  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:48.603582  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:48.639883  276553 cri.go:89] found id: ""
	I1216 11:58:48.639920  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.639931  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:48.639938  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:48.640065  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:48.671045  276553 cri.go:89] found id: ""
	I1216 11:58:48.671070  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.671079  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:48.671085  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:48.671152  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:48.703295  276553 cri.go:89] found id: ""
	I1216 11:58:48.703341  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.703351  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:48.703360  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:48.703428  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:48.736411  276553 cri.go:89] found id: ""
	I1216 11:58:48.736442  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.736451  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:48.736457  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:48.736514  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:48.767332  276553 cri.go:89] found id: ""
	I1216 11:58:48.767375  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.767387  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:48.767396  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:48.767461  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:48.800080  276553 cri.go:89] found id: ""
	I1216 11:58:48.800112  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.800123  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:48.800131  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:48.800197  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:48.832760  276553 cri.go:89] found id: ""
	I1216 11:58:48.832802  276553 logs.go:282] 0 containers: []
	W1216 11:58:48.832814  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:48.832826  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:48.832845  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:48.848815  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:48.848855  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:48.930771  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:48.930794  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:48.930808  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:49.005468  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:49.005511  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:49.040128  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:49.040166  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:51.591281  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:51.603590  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:51.603672  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:51.634226  276553 cri.go:89] found id: ""
	I1216 11:58:51.634255  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.634263  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:51.634270  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:51.634324  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:51.665685  276553 cri.go:89] found id: ""
	I1216 11:58:51.665718  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.665726  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:51.665732  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:51.665783  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:51.697159  276553 cri.go:89] found id: ""
	I1216 11:58:51.697192  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.697200  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:51.697206  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:51.697255  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:51.729513  276553 cri.go:89] found id: ""
	I1216 11:58:51.729543  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.729551  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:51.729556  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:51.729611  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:51.760525  276553 cri.go:89] found id: ""
	I1216 11:58:51.760559  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.760568  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:51.760574  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:51.760634  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:51.791787  276553 cri.go:89] found id: ""
	I1216 11:58:51.791824  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.791835  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:51.791844  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:51.791897  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:51.823131  276553 cri.go:89] found id: ""
	I1216 11:58:51.823166  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.823177  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:51.823186  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:51.823258  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:51.854638  276553 cri.go:89] found id: ""
	I1216 11:58:51.854675  276553 logs.go:282] 0 containers: []
	W1216 11:58:51.854688  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:51.854699  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:51.854720  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:51.903207  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:51.903247  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:51.916182  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:51.916210  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:51.978879  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:51.978906  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:51.978918  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:52.054050  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:52.054087  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:54.592784  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:54.606444  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:54.606511  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:54.641053  276553 cri.go:89] found id: ""
	I1216 11:58:54.641094  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.641106  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:54.641114  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:54.641194  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:54.672984  276553 cri.go:89] found id: ""
	I1216 11:58:54.673018  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.673027  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:54.673032  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:54.673081  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:54.705118  276553 cri.go:89] found id: ""
	I1216 11:58:54.705144  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.705153  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:54.705159  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:54.705210  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:54.735744  276553 cri.go:89] found id: ""
	I1216 11:58:54.735778  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.735791  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:54.735798  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:54.735851  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:54.767983  276553 cri.go:89] found id: ""
	I1216 11:58:54.768012  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.768020  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:54.768027  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:54.768076  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:54.799412  276553 cri.go:89] found id: ""
	I1216 11:58:54.799440  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.799448  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:54.799455  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:54.799506  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:54.830329  276553 cri.go:89] found id: ""
	I1216 11:58:54.830357  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.830365  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:54.830371  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:54.830421  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:54.861544  276553 cri.go:89] found id: ""
	I1216 11:58:54.861573  276553 logs.go:282] 0 containers: []
	W1216 11:58:54.861583  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:54.861593  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:54.861606  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:54.911522  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:54.911562  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:54.923947  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:54.923980  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:55.000816  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:55.000838  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:55.000854  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:58:55.072803  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:55.072845  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:57.608748  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:58:57.622071  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:58:57.622149  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:58:57.653826  276553 cri.go:89] found id: ""
	I1216 11:58:57.653863  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.653876  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 11:58:57.653885  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:58:57.653946  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:58:57.686809  276553 cri.go:89] found id: ""
	I1216 11:58:57.686839  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.686852  276553 logs.go:284] No container was found matching "etcd"
	I1216 11:58:57.686860  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:58:57.686931  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:58:57.719565  276553 cri.go:89] found id: ""
	I1216 11:58:57.719601  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.719613  276553 logs.go:284] No container was found matching "coredns"
	I1216 11:58:57.719622  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:58:57.719676  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:58:57.752279  276553 cri.go:89] found id: ""
	I1216 11:58:57.752318  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.752330  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 11:58:57.752339  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:58:57.752403  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:58:57.785915  276553 cri.go:89] found id: ""
	I1216 11:58:57.785949  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.785961  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 11:58:57.785969  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:58:57.786039  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:58:57.818703  276553 cri.go:89] found id: ""
	I1216 11:58:57.818734  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.818748  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 11:58:57.818754  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:58:57.818821  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:58:57.856323  276553 cri.go:89] found id: ""
	I1216 11:58:57.856362  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.856371  276553 logs.go:284] No container was found matching "kindnet"
	I1216 11:58:57.856377  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 11:58:57.856431  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 11:58:57.888461  276553 cri.go:89] found id: ""
	I1216 11:58:57.888507  276553 logs.go:282] 0 containers: []
	W1216 11:58:57.888515  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 11:58:57.888526  276553 logs.go:123] Gathering logs for container status ...
	I1216 11:58:57.888543  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:58:57.924744  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 11:58:57.924783  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 11:58:57.974915  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 11:58:57.974952  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:58:57.987702  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:58:57.987737  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 11:58:58.047740  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 11:58:58.047764  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:58:58.047779  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:59:00.624270  276553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:59:00.636790  276553 kubeadm.go:597] duration metric: took 4m2.920412851s to restartPrimaryControlPlane
	W1216 11:59:00.636868  276553 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1216 11:59:00.636890  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 11:59:01.078876  276553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:59:01.092675  276553 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:59:01.102060  276553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:59:01.111330  276553 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:59:01.111353  276553 kubeadm.go:157] found existing configuration files:
	
	I1216 11:59:01.111396  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:59:01.120045  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:59:01.120110  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:59:01.128974  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:59:01.137554  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:59:01.137630  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:59:01.146493  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:59:01.154841  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:59:01.154904  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:59:01.163934  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:59:01.172584  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:59:01.172637  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 11:59:01.181391  276553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 11:59:01.369411  276553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:00:57.257269  276553 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 12:00:57.257376  276553 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 12:00:57.258891  276553 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 12:00:57.258974  276553 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:00:57.259041  276553 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:00:57.259123  276553 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:00:57.259218  276553 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:00:57.259321  276553 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:00:57.262146  276553 out.go:235]   - Generating certificates and keys ...
	I1216 12:00:57.262267  276553 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:00:57.262347  276553 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:00:57.262465  276553 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:00:57.262571  276553 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:00:57.262667  276553 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:00:57.262717  276553 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:00:57.262791  276553 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:00:57.262860  276553 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:00:57.262924  276553 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:00:57.262996  276553 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:00:57.263030  276553 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:00:57.263084  276553 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:00:57.263135  276553 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:00:57.263181  276553 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:00:57.263235  276553 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:00:57.263281  276553 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:00:57.263373  276553 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:00:57.263445  276553 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:00:57.263481  276553 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:00:57.263542  276553 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:00:57.265255  276553 out.go:235]   - Booting up control plane ...
	I1216 12:00:57.265379  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:00:57.265453  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:00:57.265511  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:00:57.265629  276553 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:00:57.265768  276553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:00:57.265811  276553 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 12:00:57.265917  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266078  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266159  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266350  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266437  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266649  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266712  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.266895  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.266973  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:00:57.267138  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:00:57.267150  276553 kubeadm.go:310] 
	I1216 12:00:57.267214  276553 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 12:00:57.267271  276553 kubeadm.go:310] 		timed out waiting for the condition
	I1216 12:00:57.267281  276553 kubeadm.go:310] 
	I1216 12:00:57.267334  276553 kubeadm.go:310] 	This error is likely caused by:
	I1216 12:00:57.267378  276553 kubeadm.go:310] 		- The kubelet is not running
	I1216 12:00:57.267488  276553 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 12:00:57.267499  276553 kubeadm.go:310] 
	I1216 12:00:57.267604  276553 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 12:00:57.267659  276553 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 12:00:57.267700  276553 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 12:00:57.267716  276553 kubeadm.go:310] 
	I1216 12:00:57.267867  276553 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 12:00:57.267965  276553 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 12:00:57.267976  276553 kubeadm.go:310] 
	I1216 12:00:57.268074  276553 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 12:00:57.268144  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 12:00:57.268210  276553 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 12:00:57.268279  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 12:00:57.268328  276553 kubeadm.go:310] 
	W1216 12:00:57.268428  276553 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1216 12:00:57.268489  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1216 12:00:57.717860  276553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 12:00:57.733963  276553 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 12:00:57.744259  276553 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 12:00:57.744288  276553 kubeadm.go:157] found existing configuration files:
	
	I1216 12:00:57.744336  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 12:00:57.753893  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 12:00:57.753977  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 12:00:57.764071  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 12:00:57.773595  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 12:00:57.773682  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 12:00:57.783828  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 12:00:57.793769  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 12:00:57.793839  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 12:00:57.803766  276553 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 12:00:57.813437  276553 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 12:00:57.813513  276553 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 12:00:57.823881  276553 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 12:00:57.888749  276553 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1216 12:00:57.888835  276553 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 12:00:58.038785  276553 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 12:00:58.038916  276553 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 12:00:58.039088  276553 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1216 12:00:58.223884  276553 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 12:00:58.225611  276553 out.go:235]   - Generating certificates and keys ...
	I1216 12:00:58.225731  276553 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 12:00:58.225852  276553 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 12:00:58.225980  276553 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1216 12:00:58.226074  276553 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1216 12:00:58.226178  276553 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1216 12:00:58.226255  276553 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1216 12:00:58.226344  276553 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1216 12:00:58.226424  276553 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1216 12:00:58.226551  276553 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1216 12:00:58.226688  276553 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1216 12:00:58.226756  276553 kubeadm.go:310] [certs] Using the existing "sa" key
	I1216 12:00:58.226821  276553 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 12:00:58.353567  276553 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 12:00:58.694503  276553 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 12:00:58.792660  276553 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 12:00:59.086043  276553 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 12:00:59.108391  276553 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 12:00:59.108558  276553 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 12:00:59.108623  276553 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 12:00:59.247927  276553 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 12:00:59.249627  276553 out.go:235]   - Booting up control plane ...
	I1216 12:00:59.249774  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 12:00:59.251436  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 12:00:59.254163  276553 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 12:00:59.257479  276553 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 12:00:59.261730  276553 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1216 12:01:39.263454  276553 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1216 12:01:39.263569  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:39.263847  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:01:44.264678  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:44.264927  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:01:54.265352  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:01:54.265639  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:14.265999  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:02:14.266235  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:54.265070  276553 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1216 12:02:54.265312  276553 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1216 12:02:54.265327  276553 kubeadm.go:310] 
	I1216 12:02:54.265385  276553 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1216 12:02:54.265445  276553 kubeadm.go:310] 		timed out waiting for the condition
	I1216 12:02:54.265455  276553 kubeadm.go:310] 
	I1216 12:02:54.265515  276553 kubeadm.go:310] 	This error is likely caused by:
	I1216 12:02:54.265563  276553 kubeadm.go:310] 		- The kubelet is not running
	I1216 12:02:54.265722  276553 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1216 12:02:54.265750  276553 kubeadm.go:310] 
	I1216 12:02:54.265890  276553 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1216 12:02:54.265936  276553 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1216 12:02:54.265973  276553 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1216 12:02:54.265995  276553 kubeadm.go:310] 
	I1216 12:02:54.266136  276553 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1216 12:02:54.266255  276553 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1216 12:02:54.266265  276553 kubeadm.go:310] 
	I1216 12:02:54.266405  276553 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1216 12:02:54.266530  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1216 12:02:54.266638  276553 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1216 12:02:54.266729  276553 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1216 12:02:54.266748  276553 kubeadm.go:310] 
	I1216 12:02:54.267271  276553 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 12:02:54.267355  276553 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1216 12:02:54.267426  276553 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1216 12:02:54.267491  276553 kubeadm.go:394] duration metric: took 7m56.598620484s to StartCluster
	I1216 12:02:54.267542  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 12:02:54.267613  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 12:02:54.301812  276553 cri.go:89] found id: ""
	I1216 12:02:54.301847  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.301855  276553 logs.go:284] No container was found matching "kube-apiserver"
	I1216 12:02:54.301863  276553 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 12:02:54.301917  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 12:02:54.334730  276553 cri.go:89] found id: ""
	I1216 12:02:54.334768  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.334780  276553 logs.go:284] No container was found matching "etcd"
	I1216 12:02:54.334788  276553 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 12:02:54.334853  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 12:02:54.366080  276553 cri.go:89] found id: ""
	I1216 12:02:54.366115  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.366128  276553 logs.go:284] No container was found matching "coredns"
	I1216 12:02:54.366136  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 12:02:54.366202  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 12:02:54.396447  276553 cri.go:89] found id: ""
	I1216 12:02:54.396483  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.396495  276553 logs.go:284] No container was found matching "kube-scheduler"
	I1216 12:02:54.396503  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 12:02:54.396584  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 12:02:54.429291  276553 cri.go:89] found id: ""
	I1216 12:02:54.429326  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.429337  276553 logs.go:284] No container was found matching "kube-proxy"
	I1216 12:02:54.429345  276553 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 12:02:54.429409  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 12:02:54.460235  276553 cri.go:89] found id: ""
	I1216 12:02:54.460268  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.460276  276553 logs.go:284] No container was found matching "kube-controller-manager"
	I1216 12:02:54.460283  276553 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 12:02:54.460334  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 12:02:54.492739  276553 cri.go:89] found id: ""
	I1216 12:02:54.492771  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.492780  276553 logs.go:284] No container was found matching "kindnet"
	I1216 12:02:54.492787  276553 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1216 12:02:54.492840  276553 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1216 12:02:54.524322  276553 cri.go:89] found id: ""
	I1216 12:02:54.524358  276553 logs.go:282] 0 containers: []
	W1216 12:02:54.524369  276553 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1216 12:02:54.524384  276553 logs.go:123] Gathering logs for kubelet ...
	I1216 12:02:54.524400  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1216 12:02:54.575979  276553 logs.go:123] Gathering logs for dmesg ...
	I1216 12:02:54.576022  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 12:02:54.591148  276553 logs.go:123] Gathering logs for describe nodes ...
	I1216 12:02:54.591184  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1216 12:02:54.704231  276553 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1216 12:02:54.704259  276553 logs.go:123] Gathering logs for CRI-O ...
	I1216 12:02:54.704277  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 12:02:54.804001  276553 logs.go:123] Gathering logs for container status ...
	I1216 12:02:54.804047  276553 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1216 12:02:54.842021  276553 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1216 12:02:54.842097  276553 out.go:270] * 
	W1216 12:02:54.842173  276553 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 12:02:54.842192  276553 out.go:270] * 
	W1216 12:02:54.843372  276553 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1216 12:02:54.847542  276553 out.go:201] 
	W1216 12:02:54.848991  276553 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1216 12:02:54.849037  276553 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1216 12:02:54.849054  276553 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1216 12:02:54.850514  276553 out.go:201] 
	
	
	==> CRI-O <==
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.656651251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734351458656631278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bfd1cf7-713c-48eb-b16c-679430353fd3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.657128191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=581b9b9c-a8fa-4ec0-9b69-2477037b27f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.657188293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=581b9b9c-a8fa-4ec0-9b69-2477037b27f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.657222279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=581b9b9c-a8fa-4ec0-9b69-2477037b27f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.685340680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67236a28-ddd5-40e2-88da-673b3e2f3649 name=/runtime.v1.RuntimeService/Version
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.685432737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67236a28-ddd5-40e2-88da-673b3e2f3649 name=/runtime.v1.RuntimeService/Version
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.686401874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af992548-16b2-4205-b913-32aacc71295d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.686764166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734351458686742267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af992548-16b2-4205-b913-32aacc71295d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.687255581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=104e2fe1-7502-43a6-a66d-cdff284279ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.687377938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=104e2fe1-7502-43a6-a66d-cdff284279ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.687412029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=104e2fe1-7502-43a6-a66d-cdff284279ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.716753971Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=00be89a4-bd1b-47e0-9748-ed1a842cd85d name=/runtime.v1.RuntimeService/Version
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.716872723Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=00be89a4-bd1b-47e0-9748-ed1a842cd85d name=/runtime.v1.RuntimeService/Version
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.717994317Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b863855-0b57-4fea-a1ae-c4c0263c30e5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.718380134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734351458718358531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b863855-0b57-4fea-a1ae-c4c0263c30e5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.718850540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b51ed0a-7208-49c4-a3b2-4fbb0d525f66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.718897605Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b51ed0a-7208-49c4-a3b2-4fbb0d525f66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.718971776Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9b51ed0a-7208-49c4-a3b2-4fbb0d525f66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.747755274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bcc46978-3261-4e4b-9c98-1c20e5007e14 name=/runtime.v1.RuntimeService/Version
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.747846737Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bcc46978-3261-4e4b-9c98-1c20e5007e14 name=/runtime.v1.RuntimeService/Version
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.748987682Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d05b927-6838-421b-8e6c-ef7d1f32d4b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.749381075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734351458749357497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d05b927-6838-421b-8e6c-ef7d1f32d4b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.749880876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6e6e5b1-8d04-40e9-a45f-b3153858fdd8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.749977246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6e6e5b1-8d04-40e9-a45f-b3153858fdd8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 12:17:38 old-k8s-version-933974 crio[633]: time="2024-12-16 12:17:38.750011289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a6e6e5b1-8d04-40e9-a45f-b3153858fdd8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec16 11:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051890] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038428] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.993369] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.061080] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.582390] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.514537] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.064570] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063777] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.185160] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.127891] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.247553] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +6.349244] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.059200] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.994875] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[Dec16 11:55] kauditd_printk_skb: 46 callbacks suppressed
	[Dec16 11:59] systemd-fstab-generator[5025]: Ignoring "noauto" option for root device
	[Dec16 12:00] systemd-fstab-generator[5310]: Ignoring "noauto" option for root device
	[  +0.068159] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:17:38 up 23 min,  0 users,  load average: 0.00, 0.01, 0.00
	Linux old-k8s-version-933974 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc00037f180, 0xc000a65bc0, 0xc000a65bc0, 0x0, 0x0)
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008af180)
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]: goroutine 150 [runnable]:
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]: runtime.Gosched(...)
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]:         /usr/local/go/src/runtime/proc.go:271
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0002d8f00, 0x0, 0x0)
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:549 +0x1a5
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008af180)
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 16 12:17:33 old-k8s-version-933974 kubelet[7124]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Dec 16 12:17:33 old-k8s-version-933974 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 16 12:17:33 old-k8s-version-933974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 16 12:17:34 old-k8s-version-933974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 173.
	Dec 16 12:17:34 old-k8s-version-933974 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 16 12:17:34 old-k8s-version-933974 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 16 12:17:34 old-k8s-version-933974 kubelet[7133]: I1216 12:17:34.436139    7133 server.go:416] Version: v1.20.0
	Dec 16 12:17:34 old-k8s-version-933974 kubelet[7133]: I1216 12:17:34.436402    7133 server.go:837] Client rotation is on, will bootstrap in background
	Dec 16 12:17:34 old-k8s-version-933974 kubelet[7133]: I1216 12:17:34.438268    7133 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 16 12:17:34 old-k8s-version-933974 kubelet[7133]: W1216 12:17:34.439216    7133 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 16 12:17:34 old-k8s-version-933974 kubelet[7133]: I1216 12:17:34.439360    7133 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-933974 -n old-k8s-version-933974
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 2 (247.70227ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-933974" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (341.22s)
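Note: the kubelet/cgroup hint captured in the failure output above can be retried by hand; this is only a sketch, assuming the old-k8s-version-933974 profile from this run still exists and reusing the same kvm2/cri-o flags the job itself passes:

	# inspect why kubelet keeps restarting on the node (the restart counter was at 173 in the log above)
	out/minikube-linux-amd64 -p old-k8s-version-933974 ssh "sudo journalctl -xeu kubelet | tail -n 50"
	# retry the start with the cgroup-driver override suggested in the failure output
	out/minikube-linux-amd64 start -p old-k8s-version-933974 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd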

                                                
                                    

Test pass (282/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 12.29
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.16
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.2/json-events 5.82
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.15
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.65
22 TestOffline 114.83
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 130.12
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 8.52
35 TestAddons/parallel/Registry 18
37 TestAddons/parallel/InspektorGadget 12.11
40 TestAddons/parallel/CSI 47.35
41 TestAddons/parallel/Headlamp 20.38
42 TestAddons/parallel/CloudSpanner 5.55
43 TestAddons/parallel/LocalPath 54.35
44 TestAddons/parallel/NvidiaDevicePlugin 7.07
45 TestAddons/parallel/Yakd 12.43
47 TestAddons/StoppedEnableDisable 91.28
48 TestCertOptions 103.17
49 TestCertExpiration 332.75
51 TestForceSystemdFlag 44.53
52 TestForceSystemdEnv 71.49
54 TestKVMDriverInstallOrUpdate 4.03
58 TestErrorSpam/setup 40.65
59 TestErrorSpam/start 0.39
60 TestErrorSpam/status 0.79
61 TestErrorSpam/pause 1.53
62 TestErrorSpam/unpause 1.76
63 TestErrorSpam/stop 5.86
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 54.8
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 361.48
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.51
75 TestFunctional/serial/CacheCmd/cache/add_local 1.96
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.73
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 60.38
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.44
87 TestFunctional/serial/InvalidService 4.44
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 20.15
91 TestFunctional/parallel/DryRun 0.31
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.82
97 TestFunctional/parallel/ServiceCmdConnect 11.51
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 39.99
101 TestFunctional/parallel/SSHCmd 0.46
102 TestFunctional/parallel/CpCmd 1.62
103 TestFunctional/parallel/MySQL 25.33
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.28
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
113 TestFunctional/parallel/License 0.28
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
117 TestFunctional/parallel/ProfileCmd/profile_list 0.42
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.26
121 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
122 TestFunctional/parallel/ServiceCmd/DeployApp 13.17
123 TestFunctional/parallel/MountCmd/any-port 7.73
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
130 TestFunctional/parallel/ServiceCmd/List 0.47
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
132 TestFunctional/parallel/Version/short 0.05
133 TestFunctional/parallel/Version/components 0.54
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.95
140 TestFunctional/parallel/ImageCommands/Setup 1.53
141 TestFunctional/parallel/ServiceCmd/Format 0.28
142 TestFunctional/parallel/ServiceCmd/URL 0.28
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.65
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.89
149 TestFunctional/parallel/MountCmd/specific-port 1.83
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.83
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.49
152 TestFunctional/parallel/ImageCommands/ImageRemove 1.93
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.71
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 198.4
162 TestMultiControlPlane/serial/DeployApp 5.47
163 TestMultiControlPlane/serial/PingHostFromPods 1.2
164 TestMultiControlPlane/serial/AddWorkerNode 54.29
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
167 TestMultiControlPlane/serial/CopyFile 13.09
168 TestMultiControlPlane/serial/StopSecondaryNode 91.42
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
170 TestMultiControlPlane/serial/RestartSecondaryNode 49.12
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 443.01
173 TestMultiControlPlane/serial/DeleteSecondaryNode 16.63
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
175 TestMultiControlPlane/serial/StopCluster 272.94
176 TestMultiControlPlane/serial/RestartCluster 125.48
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
178 TestMultiControlPlane/serial/AddSecondaryNode 75.65
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
183 TestJSONOutput/start/Command 56.12
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.69
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.65
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.68
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.2
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 88.41
215 TestMountStart/serial/StartWithMountFirst 24.17
216 TestMountStart/serial/VerifyMountFirst 0.39
217 TestMountStart/serial/StartWithMountSecond 31.07
218 TestMountStart/serial/VerifyMountSecond 0.5
219 TestMountStart/serial/DeleteFirst 0.89
220 TestMountStart/serial/VerifyMountPostDelete 0.39
221 TestMountStart/serial/Stop 1.29
222 TestMountStart/serial/RestartStopped 21.67
223 TestMountStart/serial/VerifyMountPostStop 0.38
226 TestMultiNode/serial/FreshStart2Nodes 107.96
227 TestMultiNode/serial/DeployApp2Nodes 5.12
228 TestMultiNode/serial/PingHostFrom2Pods 0.78
229 TestMultiNode/serial/AddNode 51.76
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.59
232 TestMultiNode/serial/CopyFile 7.46
233 TestMultiNode/serial/StopNode 2.33
234 TestMultiNode/serial/StartAfterStop 37.36
235 TestMultiNode/serial/RestartKeepsNodes 340.44
236 TestMultiNode/serial/DeleteNode 2.21
237 TestMultiNode/serial/StopMultiNode 181.9
238 TestMultiNode/serial/RestartMultiNode 114.37
239 TestMultiNode/serial/ValidateNameConflict 45.1
246 TestScheduledStopUnix 110.51
250 TestRunningBinaryUpgrade 200.81
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 122.61
264 TestNetworkPlugins/group/false 3.31
268 TestNoKubernetes/serial/StartWithStopK8s 47.87
269 TestNoKubernetes/serial/Start 41.55
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
271 TestNoKubernetes/serial/ProfileList 1.77
272 TestNoKubernetes/serial/Stop 1.3
273 TestNoKubernetes/serial/StartNoArgs 43.67
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
275 TestStoppedBinaryUpgrade/Setup 0.65
276 TestStoppedBinaryUpgrade/Upgrade 93.35
285 TestPause/serial/Start 65.04
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.92
287 TestNetworkPlugins/group/auto/Start 59.35
288 TestNetworkPlugins/group/kindnet/Start 73.31
289 TestPause/serial/SecondStartNoReconfiguration 51.43
290 TestNetworkPlugins/group/auto/KubeletFlags 0.24
291 TestNetworkPlugins/group/auto/NetCatPod 13.24
292 TestNetworkPlugins/group/auto/DNS 0.18
293 TestNetworkPlugins/group/auto/Localhost 0.14
294 TestNetworkPlugins/group/auto/HairPin 0.15
295 TestNetworkPlugins/group/calico/Start 74.7
296 TestPause/serial/Pause 0.76
297 TestPause/serial/VerifyStatus 0.27
298 TestPause/serial/Unpause 0.71
299 TestPause/serial/PauseAgain 0.89
300 TestPause/serial/DeletePaused 1.12
301 TestPause/serial/VerifyDeletedResources 0.51
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/custom-flannel/Start 88.36
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
305 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
306 TestNetworkPlugins/group/kindnet/DNS 0.14
307 TestNetworkPlugins/group/kindnet/Localhost 0.12
308 TestNetworkPlugins/group/kindnet/HairPin 0.11
309 TestNetworkPlugins/group/enable-default-cni/Start 69.98
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.29
312 TestNetworkPlugins/group/calico/NetCatPod 14.27
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
315 TestNetworkPlugins/group/calico/DNS 0.16
316 TestNetworkPlugins/group/calico/Localhost 0.15
317 TestNetworkPlugins/group/calico/HairPin 0.16
318 TestNetworkPlugins/group/custom-flannel/DNS 0.17
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
323 TestNetworkPlugins/group/flannel/Start 74.61
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
327 TestNetworkPlugins/group/bridge/Start 84.06
331 TestStartStop/group/no-preload/serial/FirstStart 106.76
332 TestNetworkPlugins/group/flannel/ControllerPod 6.01
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
334 TestNetworkPlugins/group/flannel/NetCatPod 11.23
335 TestNetworkPlugins/group/flannel/DNS 0.21
336 TestNetworkPlugins/group/flannel/Localhost 0.14
337 TestNetworkPlugins/group/flannel/HairPin 0.15
338 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
339 TestNetworkPlugins/group/bridge/NetCatPod 12.94
340 TestNetworkPlugins/group/bridge/DNS 0.15
341 TestNetworkPlugins/group/bridge/Localhost 0.13
342 TestNetworkPlugins/group/bridge/HairPin 0.13
344 TestStartStop/group/embed-certs/serial/FirstStart 63.44
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.43
347 TestStartStop/group/no-preload/serial/DeployApp 10.31
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.25
349 TestStartStop/group/no-preload/serial/Stop 91.07
350 TestStartStop/group/embed-certs/serial/DeployApp 13.3
351 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
352 TestStartStop/group/embed-certs/serial/Stop 91.02
353 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
355 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.32
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
357 TestStartStop/group/no-preload/serial/SecondStart 350.78
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
359 TestStartStop/group/embed-certs/serial/SecondStart 303.75
360 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
361 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 301.51
364 TestStartStop/group/old-k8s-version/serial/Stop 1.34
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
369 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
370 TestStartStop/group/embed-certs/serial/Pause 2.84
372 TestStartStop/group/newest-cni/serial/FirstStart 46.16
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.01
376 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
377 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.58
378 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
379 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.43
380 TestStartStop/group/no-preload/serial/Pause 3.53
381 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
383 TestStartStop/group/newest-cni/serial/Stop 10.31
384 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
385 TestStartStop/group/newest-cni/serial/SecondStart 35.03
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
389 TestStartStop/group/newest-cni/serial/Pause 2.65
x
+
TestDownloadOnly/v1.20.0/json-events (12.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-893315 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-893315 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.293809619s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.29s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1216 10:31:30.244102  217519 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1216 10:31:30.244219  217519 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-893315
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-893315: exit status 85 (69.699932ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-893315 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC |          |
	|         | -p download-only-893315        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 10:31:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 10:31:17.997483  217532 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:31:17.997606  217532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:31:17.997617  217532 out.go:358] Setting ErrFile to fd 2...
	I1216 10:31:17.997621  217532 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:31:17.997804  217532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	W1216 10:31:17.997954  217532 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20107-210204/.minikube/config/config.json: open /home/jenkins/minikube-integration/20107-210204/.minikube/config/config.json: no such file or directory
	I1216 10:31:17.998558  217532 out.go:352] Setting JSON to true
	I1216 10:31:17.999564  217532 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8025,"bootTime":1734337053,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:31:17.999694  217532 start.go:139] virtualization: kvm guest
	I1216 10:31:18.002434  217532 out.go:97] [download-only-893315] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1216 10:31:18.002598  217532 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 10:31:18.002696  217532 notify.go:220] Checking for updates...
	I1216 10:31:18.004209  217532 out.go:169] MINIKUBE_LOCATION=20107
	I1216 10:31:18.005658  217532 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:31:18.007125  217532 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 10:31:18.008802  217532 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 10:31:18.010379  217532 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 10:31:18.013575  217532 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 10:31:18.013808  217532 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:31:18.121025  217532 out.go:97] Using the kvm2 driver based on user configuration
	I1216 10:31:18.121060  217532 start.go:297] selected driver: kvm2
	I1216 10:31:18.121068  217532 start.go:901] validating driver "kvm2" against <nil>
	I1216 10:31:18.121477  217532 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 10:31:18.121631  217532 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 10:31:18.138526  217532 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 10:31:18.138610  217532 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 10:31:18.139211  217532 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1216 10:31:18.139366  217532 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 10:31:18.139398  217532 cni.go:84] Creating CNI manager for ""
	I1216 10:31:18.139448  217532 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 10:31:18.139454  217532 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 10:31:18.139504  217532 start.go:340] cluster config:
	{Name:download-only-893315 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-893315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:31:18.139693  217532 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 10:31:18.141793  217532 out.go:97] Downloading VM boot image ...
	I1216 10:31:18.141849  217532 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20107-210204/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1216 10:31:21.826225  217532 out.go:97] Starting "download-only-893315" primary control-plane node in "download-only-893315" cluster
	I1216 10:31:21.826273  217532 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 10:31:21.851513  217532 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1216 10:31:21.851568  217532 cache.go:56] Caching tarball of preloaded images
	I1216 10:31:21.851767  217532 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 10:31:21.853716  217532 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1216 10:31:21.853744  217532 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1216 10:31:21.881053  217532 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-893315 host does not exist
	  To start a cluster, run: "minikube start -p download-only-893315"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
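The preload fetch recorded in the log above is an ordinary checksum-verified download; it can be reproduced outside the test harness with the URL and md5 printed there (a sketch only, writing into the current directory rather than the minikube cache):

	curl -fLo preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	# verify against the md5 the downloader used (checksum=md5:... in the log)
	echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -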

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-893315
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (5.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-270974 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-270974 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.814993618s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.82s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1216 10:31:36.432622  217519 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1216 10:31:36.432663  217519 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-270974
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-270974: exit status 85 (69.178812ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-893315 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC |                     |
	|         | -p download-only-893315        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:31 UTC |
	| delete  | -p download-only-893315        | download-only-893315 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC | 16 Dec 24 10:31 UTC |
	| start   | -o=json --download-only        | download-only-270974 | jenkins | v1.34.0 | 16 Dec 24 10:31 UTC |                     |
	|         | -p download-only-270974        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 10:31:30
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 10:31:30.664095  217736 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:31:30.664387  217736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:31:30.664398  217736 out.go:358] Setting ErrFile to fd 2...
	I1216 10:31:30.664404  217736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:31:30.664609  217736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 10:31:30.665337  217736 out.go:352] Setting JSON to true
	I1216 10:31:30.666363  217736 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8038,"bootTime":1734337053,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:31:30.666476  217736 start.go:139] virtualization: kvm guest
	I1216 10:31:30.668845  217736 out.go:97] [download-only-270974] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 10:31:30.669076  217736 notify.go:220] Checking for updates...
	I1216 10:31:30.670517  217736 out.go:169] MINIKUBE_LOCATION=20107
	I1216 10:31:30.672049  217736 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:31:30.673615  217736 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 10:31:30.674957  217736 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 10:31:30.676422  217736 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 10:31:30.679098  217736 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 10:31:30.679410  217736 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:31:30.714342  217736 out.go:97] Using the kvm2 driver based on user configuration
	I1216 10:31:30.714381  217736 start.go:297] selected driver: kvm2
	I1216 10:31:30.714390  217736 start.go:901] validating driver "kvm2" against <nil>
	I1216 10:31:30.714797  217736 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 10:31:30.714903  217736 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20107-210204/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1216 10:31:30.732856  217736 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1216 10:31:30.732950  217736 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 10:31:30.733529  217736 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1216 10:31:30.733718  217736 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 10:31:30.733758  217736 cni.go:84] Creating CNI manager for ""
	I1216 10:31:30.733815  217736 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 10:31:30.733827  217736 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 10:31:30.733895  217736 start.go:340] cluster config:
	{Name:download-only-270974 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-270974 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:31:30.734007  217736 iso.go:125] acquiring lock: {Name:mk183fa0ccac365696b97d53fc41bbe189a71824 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 10:31:30.736036  217736 out.go:97] Starting "download-only-270974" primary control-plane node in "download-only-270974" cluster
	I1216 10:31:30.736067  217736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:31:30.792474  217736 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1216 10:31:30.792521  217736 cache.go:56] Caching tarball of preloaded images
	I1216 10:31:30.792721  217736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 10:31:30.794726  217736 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1216 10:31:30.794757  217736 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1216 10:31:30.820829  217736 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1216 10:31:34.937223  217736 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1216 10:31:34.937338  217736 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20107-210204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-270974 host does not exist
	  To start a cluster, run: "minikube start -p download-only-270974"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-270974
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1216 10:31:37.073065  217519 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-453115 --alsologtostderr --binary-mirror http://127.0.0.1:40829 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-453115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-453115
--- PASS: TestBinaryMirror (0.65s)

                                                
                                    
x
+
TestOffline (114.83s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-872100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-872100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m53.89211808s)
helpers_test.go:175: Cleaning up "offline-crio-872100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-872100
--- PASS: TestOffline (114.83s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-020871
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-020871: exit status 85 (61.027513ms)

                                                
                                                
-- stdout --
	* Profile "addons-020871" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-020871"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-020871
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-020871: exit status 85 (60.077369ms)

                                                
                                                
-- stdout --
	* Profile "addons-020871" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-020871"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (130.12s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-020871 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-020871 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m10.116565467s)
--- PASS: TestAddons/Setup (130.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-020871 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-020871 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-020871 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-020871 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8dbdf275-6cac-47d9-a5f1-d03fff3bb404] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8dbdf275-6cac-47d9-a5f1-d03fff3bb404] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.00520648s
addons_test.go:633: (dbg) Run:  kubectl --context addons-020871 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-020871 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-020871 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.52s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.286105ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-r6zm6" [80b40373-c14b-4d26-ba1f-d0eab35d8a56] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005835996s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qk5tx" [302e1efd-762f-487a-96d5-b24b982f648f] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003791428s
addons_test.go:331: (dbg) Run:  kubectl --context addons-020871 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-020871 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-020871 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.744520354s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 ip
2024/12/16 10:34:33 [DEBUG] GET http://192.168.39.206:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 addons disable registry --alsologtostderr -v=1: (1.077120627s)
--- PASS: TestAddons/parallel/Registry (18.00s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.11s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hb5nv" [be4e3e1c-7931-47b1-bf08-01626212d6f9] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004974895s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 addons disable inspektor-gadget --alsologtostderr -v=1: (6.104804045s)
--- PASS: TestAddons/parallel/InspektorGadget (12.11s)

                                                
                                    
x
+
TestAddons/parallel/CSI (47.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1216 10:34:24.116446  217519 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 10:34:24.148751  217519 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 10:34:24.148791  217519 kapi.go:107] duration metric: took 32.366017ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 32.38028ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-020871 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-020871 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [877378ca-d9b4-45e4-92f6-df4d7336c95a] Pending
helpers_test.go:344: "task-pv-pod" [877378ca-d9b4-45e4-92f6-df4d7336c95a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [877378ca-d9b4-45e4-92f6-df4d7336c95a] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004045174s
addons_test.go:511: (dbg) Run:  kubectl --context addons-020871 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-020871 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-020871 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-020871 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-020871 delete pod task-pv-pod: (1.060691791s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-020871 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-020871 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-020871 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a22d7fd4-d3f5-4ec3-a4c2-8d86dd5c9880] Pending
helpers_test.go:344: "task-pv-pod-restore" [a22d7fd4-d3f5-4ec3-a4c2-8d86dd5c9880] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a22d7fd4-d3f5-4ec3-a4c2-8d86dd5c9880] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004261024s
addons_test.go:553: (dbg) Run:  kubectl --context addons-020871 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-020871 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-020871 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 addons disable volumesnapshots --alsologtostderr -v=1: (1.308062591s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.910201448s)
--- PASS: TestAddons/parallel/CSI (47.35s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-020871 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-zmsmx" [2c42d0b7-344b-4b9f-8cc5-f57f78c107de] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-zmsmx" [2c42d0b7-344b-4b9f-8cc5-f57f78c107de] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-zmsmx" [2c42d0b7-344b-4b9f-8cc5-f57f78c107de] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004211935s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 addons disable headlamp --alsologtostderr -v=1: (6.473379734s)
--- PASS: TestAddons/parallel/Headlamp (20.38s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-kgxr6" [4206a092-8638-460d-af9e-4b7f12e77886] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004529704s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (54.35s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-020871 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-020871 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-020871 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [03d9a9ab-2b3c-4d4b-8cbf-4971cf628aae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [03d9a9ab-2b3c-4d4b-8cbf-4971cf628aae] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [03d9a9ab-2b3c-4d4b-8cbf-4971cf628aae] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.042600902s
addons_test.go:906: (dbg) Run:  kubectl --context addons-020871 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 ssh "cat /opt/local-path-provisioner/pvc-2768e9dc-d30c-44a0-aa98-3d81d07df32d_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-020871 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-020871 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.47769745s)
--- PASS: TestAddons/parallel/LocalPath (54.35s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (7.07s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-z7nb7" [5897e921-e086-496a-8865-2c37fd8ea3bd] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007909112s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.059869157s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.07s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-svkvw" [7853610e-6afd-4b41-8bec-56d42e9fa391] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.008185453s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-020871 addons disable yakd --alsologtostderr -v=1: (6.424710305s)
--- PASS: TestAddons/parallel/Yakd (12.43s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (91.28s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-020871
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-020871: (1m30.972629335s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-020871
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-020871
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-020871
--- PASS: TestAddons/StoppedEnableDisable (91.28s)

                                                
                                    
x
+
TestCertOptions (103.17s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-533676 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-533676 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m41.628830124s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-533676 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-533676 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-533676 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-533676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-533676
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-533676: (1.006348141s)
--- PASS: TestCertOptions (103.17s)

                                                
                                    
x
+
TestCertExpiration (332.75s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-002454 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1216 11:38:48.522202  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-002454 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m41.471659009s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-002454 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-002454 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (50.443393619s)
helpers_test.go:175: Cleaning up "cert-expiration-002454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-002454
--- PASS: TestCertExpiration (332.75s)

                                                
                                    
x
+
TestForceSystemdFlag (44.53s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-318691 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-318691 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.488794223s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-318691 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-318691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-318691
--- PASS: TestForceSystemdFlag (44.53s)

                                                
                                    
x
+
TestForceSystemdEnv (71.49s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-931301 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-931301 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.482877697s)
helpers_test.go:175: Cleaning up "force-systemd-env-931301" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-931301
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-931301: (1.006426805s)
--- PASS: TestForceSystemdEnv (71.49s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.03s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1216 11:40:01.260887  217519 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 11:40:01.261136  217519 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1216 11:40:01.294286  217519 install.go:62] docker-machine-driver-kvm2: exit status 1
W1216 11:40:01.294690  217519 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1216 11:40:01.294774  217519 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3816353740/001/docker-machine-driver-kvm2
I1216 11:40:01.597511  217519 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3816353740/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240] Decompressors:map[bz2:0xc00059b4b0 gz:0xc00059b4b8 tar:0xc00059b430 tar.bz2:0xc00059b440 tar.gz:0xc00059b460 tar.xz:0xc00059b470 tar.zst:0xc00059b490 tbz2:0xc00059b440 tgz:0xc00059b460 txz:0xc00059b470 tzst:0xc00059b490 xz:0xc00059b4c0 zip:0xc00059b4e0 zst:0xc00059b4c8] Getters:map[file:0xc000ad5830 http:0xc00072b270 https:0xc00072b2c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1216 11:40:01.597583  217519 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3816353740/001/docker-machine-driver-kvm2
I1216 11:40:03.522812  217519 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 11:40:03.522911  217519 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1216 11:40:03.555184  217519 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1216 11:40:03.555215  217519 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1216 11:40:03.555279  217519 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1216 11:40:03.555312  217519 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3816353740/002/docker-machine-driver-kvm2
I1216 11:40:03.607016  217519 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3816353740/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240 0x532a240] Decompressors:map[bz2:0xc00059b4b0 gz:0xc00059b4b8 tar:0xc00059b430 tar.bz2:0xc00059b440 tar.gz:0xc00059b460 tar.xz:0xc00059b470 tar.zst:0xc00059b490 tbz2:0xc00059b440 tgz:0xc00059b460 txz:0xc00059b470 tzst:0xc00059b490 xz:0xc00059b4c0 zip:0xc00059b4e0 zst:0xc00059b4c8] Getters:map[file:0xc002432a30 http:0xc002434af0 https:0xc002434b40] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1216 11:40:03.607062  217519 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3816353740/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.03s)
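The download failures logged above are expected: the test exercises minikube's fallback from the arch-specific driver URL (whose -amd64.sha256 checksum file returns 404 for v1.3.0) to the common, un-suffixed URL. A minimal Go sketch of that try-arch-then-fall-back pattern, assuming a hypothetical tryDownload helper that only probes reachability rather than minikube's checksum-verified download.go:

	package main

	import (
		"fmt"
		"net/http"
	)

	// tryDownload stands in for a checksum-verified download; here it only
	// checks that the artifact and its .sha256 sibling are reachable.
	func tryDownload(url string) error {
		for _, u := range []string{url, url + ".sha256"} {
			resp, err := http.Head(u)
			if err != nil {
				return err
			}
			resp.Body.Close()
			if resp.StatusCode != http.StatusOK {
				return fmt.Errorf("bad response code: %d for %s", resp.StatusCode, u)
			}
		}
		return nil
	}

	func main() {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
		// Prefer the arch-specific artifact; fall back to the common one,
		// as the log above does when the -amd64 checksum file 404s.
		if err := tryDownload(base + "-amd64"); err != nil {
			fmt.Println("arch specific driver failed:", err, "- trying the common version")
			if err := tryDownload(base); err != nil {
				fmt.Println("common version failed:", err)
			}
		}
	}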

                                                
                                    
x
+
TestErrorSpam/setup (40.65s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-028105 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-028105 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-028105 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-028105 --driver=kvm2  --container-runtime=crio: (40.651652836s)
--- PASS: TestErrorSpam/setup (40.65s)

                                                
                                    
x
+
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
x
+
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
x
+
TestErrorSpam/pause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 pause
--- PASS: TestErrorSpam/pause (1.53s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
x
+
TestErrorSpam/stop (5.86s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 stop: (2.315051399s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 stop: (2.079863577s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-028105 --log_dir /tmp/nospam-028105 stop: (1.46946798s)
--- PASS: TestErrorSpam/stop (5.86s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20107-210204/.minikube/files/etc/test/nested/copy/217519/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (54.8s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365716 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1216 10:43:48.522255  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:43:48.528696  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:43:48.540100  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:43:48.561518  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:43:48.602981  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:43:48.684487  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:43:48.846063  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:43:49.167641  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-365716 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (54.799344217s)
--- PASS: TestFunctional/serial/StartWithProxy (54.80s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (361.48s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1216 10:43:49.168648  217519 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365716 --alsologtostderr -v=8
E1216 10:43:49.808997  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:43:51.090736  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:43:53.653723  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:43:58.776004  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:09.018131  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:44:29.499863  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:45:10.462628  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:46:32.384526  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:48:48.521998  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:49:16.226497  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-365716 --alsologtostderr -v=8: (6m1.477074731s)
functional_test.go:663: soft start took 6m1.477873561s for "functional-365716" cluster.
I1216 10:49:50.646143  217519 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (361.48s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-365716 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 cache add registry.k8s.io/pause:3.1: (1.17054258s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 cache add registry.k8s.io/pause:3.3: (1.194901329s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 cache add registry.k8s.io/pause:latest: (1.149080067s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-365716 /tmp/TestFunctionalserialCacheCmdcacheadd_local3089342713/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 cache add minikube-local-cache-test:functional-365716
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 cache add minikube-local-cache-test:functional-365716: (1.640132524s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 cache delete minikube-local-cache-test:functional-365716
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-365716
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.96s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365716 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.988083ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 cache reload: (1.052911617s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.73s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 kubectl -- --context functional-365716 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-365716 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (60.38s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365716 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-365716 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m0.378466604s)
functional_test.go:761: restart took 1m0.37860279s for "functional-365716" cluster.
I1216 10:50:59.001064  217519 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (60.38s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-365716 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.46s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 logs: (1.45933692s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

TestFunctional/serial/LogsFileCmd (1.44s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 logs --file /tmp/TestFunctionalserialLogsFileCmd1967194932/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 logs --file /tmp/TestFunctionalserialLogsFileCmd1967194932/001/logs.txt: (1.43746415s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

TestFunctional/serial/InvalidService (4.44s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-365716 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-365716
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-365716: exit status 115 (270.063189ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.192:31141 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-365716 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.44s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365716 config get cpus: exit status 14 (65.479064ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365716 config get cpus: exit status 14 (83.168206ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (20.15s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-365716 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-365716 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 228286: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (20.15s)

TestFunctional/parallel/DryRun (0.31s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365716 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-365716 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (159.361786ms)

-- stdout --
	* [functional-365716] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1216 10:51:29.811434  227977 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:51:29.811745  227977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:51:29.811761  227977 out.go:358] Setting ErrFile to fd 2...
	I1216 10:51:29.811769  227977 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:51:29.812084  227977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 10:51:29.812819  227977 out.go:352] Setting JSON to false
	I1216 10:51:29.814412  227977 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9237,"bootTime":1734337053,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:51:29.814515  227977 start.go:139] virtualization: kvm guest
	I1216 10:51:29.817205  227977 out.go:177] * [functional-365716] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 10:51:29.818749  227977 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 10:51:29.818849  227977 notify.go:220] Checking for updates...
	I1216 10:51:29.822615  227977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:51:29.823880  227977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 10:51:29.825253  227977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 10:51:29.826465  227977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 10:51:29.827673  227977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 10:51:29.829209  227977 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:51:29.829640  227977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:51:29.829734  227977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:51:29.846066  227977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42151
	I1216 10:51:29.846725  227977 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:51:29.847401  227977 main.go:141] libmachine: Using API Version  1
	I1216 10:51:29.847425  227977 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:51:29.847891  227977 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:51:29.848098  227977 main.go:141] libmachine: (functional-365716) Calling .DriverName
	I1216 10:51:29.848383  227977 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:51:29.848838  227977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:51:29.848896  227977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:51:29.864707  227977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41567
	I1216 10:51:29.865135  227977 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:51:29.865712  227977 main.go:141] libmachine: Using API Version  1
	I1216 10:51:29.865750  227977 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:51:29.866157  227977 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:51:29.866402  227977 main.go:141] libmachine: (functional-365716) Calling .DriverName
	I1216 10:51:29.906271  227977 out.go:177] * Using the kvm2 driver based on existing profile
	I1216 10:51:29.907648  227977 start.go:297] selected driver: kvm2
	I1216 10:51:29.907666  227977 start.go:901] validating driver "kvm2" against &{Name:functional-365716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-365716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.192 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:51:29.907787  227977 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 10:51:29.910097  227977 out.go:201] 
	W1216 10:51:29.911442  227977 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 10:51:29.912828  227977 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365716 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-365716 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-365716 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (162.468544ms)

-- stdout --
	* [functional-365716] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1216 10:51:18.604370  226414 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:51:18.604506  226414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:51:18.604517  226414 out.go:358] Setting ErrFile to fd 2...
	I1216 10:51:18.604524  226414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:51:18.605007  226414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 10:51:18.605902  226414 out.go:352] Setting JSON to false
	I1216 10:51:18.607281  226414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9226,"bootTime":1734337053,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 10:51:18.607431  226414 start.go:139] virtualization: kvm guest
	I1216 10:51:18.610094  226414 out.go:177] * [functional-365716] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1216 10:51:18.611548  226414 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 10:51:18.611593  226414 notify.go:220] Checking for updates...
	I1216 10:51:18.614257  226414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 10:51:18.615709  226414 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 10:51:18.617060  226414 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 10:51:18.618308  226414 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 10:51:18.619577  226414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 10:51:18.621226  226414 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:51:18.621617  226414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:51:18.621695  226414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:51:18.637654  226414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38371
	I1216 10:51:18.638308  226414 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:51:18.638967  226414 main.go:141] libmachine: Using API Version  1
	I1216 10:51:18.638991  226414 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:51:18.639478  226414 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:51:18.639687  226414 main.go:141] libmachine: (functional-365716) Calling .DriverName
	I1216 10:51:18.639968  226414 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 10:51:18.640276  226414 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:51:18.640316  226414 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:51:18.656126  226414 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I1216 10:51:18.656572  226414 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:51:18.657085  226414 main.go:141] libmachine: Using API Version  1
	I1216 10:51:18.657113  226414 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:51:18.657418  226414 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:51:18.657593  226414 main.go:141] libmachine: (functional-365716) Calling .DriverName
	I1216 10:51:18.695758  226414 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1216 10:51:18.697130  226414 start.go:297] selected driver: kvm2
	I1216 10:51:18.697168  226414 start.go:901] validating driver "kvm2" against &{Name:functional-365716 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:functional-365716 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.192 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 10:51:18.697349  226414 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 10:51:18.700013  226414 out.go:201] 
	W1216 10:51:18.701420  226414 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 10:51:18.702723  226414 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.82s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.82s)

TestFunctional/parallel/ServiceCmdConnect (11.51s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-365716 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-365716 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4ds2b" [e3c0837a-5260-4dbc-b79a-1799098f199b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4ds2b" [e3c0837a-5260-4dbc-b79a-1799098f199b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003954114s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.192:32000
functional_test.go:1675: http://192.168.39.192:32000: success! body:

Hostname: hello-node-connect-67bdd5bbb4-4ds2b

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.192:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.192:32000
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.51s)

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (39.99s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f93ac125-c92e-4755-8ed3-6ddb6ea53540] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003437544s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-365716 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-365716 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-365716 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-365716 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2c5e1d4a-4cf5-43bc-8946-4047d56d695d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2c5e1d4a-4cf5-43bc-8946-4047d56d695d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004422234s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-365716 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-365716 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-365716 delete -f testdata/storage-provisioner/pod.yaml: (1.82304974s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-365716 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [03c2167f-7b5f-422d-b570-b852528d4949] Pending
helpers_test.go:344: "sp-pod" [03c2167f-7b5f-422d-b570-b852528d4949] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [03c2167f-7b5f-422d-b570-b852528d4949] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.106155146s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-365716 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.99s)

TestFunctional/parallel/SSHCmd (0.46s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (1.62s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh -n functional-365716 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 cp functional-365716:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1712085180/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh -n functional-365716 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh -n functional-365716 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.62s)

TestFunctional/parallel/MySQL (25.33s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-365716 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-klw4f" [3a202acb-07eb-4779-8624-c7834d8c9155] Pending
helpers_test.go:344: "mysql-6cdb49bbb-klw4f" [3a202acb-07eb-4779-8624-c7834d8c9155] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-klw4f" [3a202acb-07eb-4779-8624-c7834d8c9155] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.004024096s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-365716 exec mysql-6cdb49bbb-klw4f -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-365716 exec mysql-6cdb49bbb-klw4f -- mysql -ppassword -e "show databases;": exit status 1 (154.934249ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1216 10:51:46.409772  217519 retry.go:31] will retry after 1.452417013s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-365716 exec mysql-6cdb49bbb-klw4f -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-365716 exec mysql-6cdb49bbb-klw4f -- mysql -ppassword -e "show databases;": exit status 1 (191.959087ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1216 10:51:48.054704  217519 retry.go:31] will retry after 1.096588633s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-365716 exec mysql-6cdb49bbb-klw4f -- mysql -ppassword -e "show databases;"
2024/12/16 10:51:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (25.33s)

TestFunctional/parallel/FileSync (0.2s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/217519/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo cat /etc/test/nested/copy/217519/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.28s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/217519.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo cat /etc/ssl/certs/217519.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/217519.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo cat /usr/share/ca-certificates/217519.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2175192.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo cat /etc/ssl/certs/2175192.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2175192.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo cat /usr/share/ca-certificates/2175192.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.28s)

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-365716 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365716 ssh "sudo systemctl is-active docker": exit status 1 (201.17361ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365716 ssh "sudo systemctl is-active containerd": exit status 1 (194.628574ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)

TestFunctional/parallel/License (0.28s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-365716 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-365716 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-365716 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 225984: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-365716 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "351.549155ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "63.726133ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-365716 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.26s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-365716 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7501fd79-bddf-45ea-8e5b-464b38ab301b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7501fd79-bddf-45ea-8e5b-464b38ab301b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.004457136s
I1216 10:51:20.387234  217519 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "292.543294ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "53.975051ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.17s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-365716 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-365716 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-plxkj" [59defc3f-8f66-4193-8f0b-afa488d378cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-plxkj" [59defc3f-8f66-4193-8f0b-afa488d378cf] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.003577817s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.17s)

TestFunctional/parallel/MountCmd/any-port (7.73s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdany-port2310286504/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734346278711519180" to /tmp/TestFunctionalparallelMountCmdany-port2310286504/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734346278711519180" to /tmp/TestFunctionalparallelMountCmdany-port2310286504/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734346278711519180" to /tmp/TestFunctionalparallelMountCmdany-port2310286504/001/test-1734346278711519180
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (265.866144ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 10:51:18.977743  217519 retry.go:31] will retry after 727.984729ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 10:51 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 10:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 10:51 test-1734346278711519180
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh cat /mount-9p/test-1734346278711519180
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-365716 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [014f5ad1-e63b-4afa-9728-51494ea4058e] Pending
helpers_test.go:344: "busybox-mount" [014f5ad1-e63b-4afa-9728-51494ea4058e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [014f5ad1-e63b-4afa-9728-51494ea4058e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [014f5ad1-e63b-4afa-9728-51494ea4058e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.0045419s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-365716 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdany-port2310286504/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.73s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-365716 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.152.249 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-365716 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
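
Taken together, the three tunnel subtests above amount to this manual sequence; a minimal sketch, assuming nginx-svc is the LoadBalancer service created during setup and using curl as a stand-in for the test's HTTP probe:
	# keep a tunnel running so LoadBalancer services receive a reachable ingress IP
	out/minikube-linux-amd64 -p functional-365716 tunnel --alsologtostderr &
	# read the assigned ingress IP and request the service directly over it
	INGRESS_IP=$(kubectl --context functional-365716 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://${INGRESS_IP}/"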

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 service list -o json
functional_test.go:1494: Took "428.076463ms" to run "out/minikube-linux-amd64 -p functional-365716 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.192:31432
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)
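
The service subtests in this group query the same hello-node NodePort in different output modes; a minimal sketch of the equivalent manual commands, all taken from the runs above:
	# list services, once as a table and once as JSON
	out/minikube-linux-amd64 -p functional-365716 service list
	out/minikube-linux-amd64 -p functional-365716 service list -o json
	# print the HTTPS URL for hello-node (the run above resolved it to https://192.168.39.192:31432)
	out/minikube-linux-amd64 -p functional-365716 service --namespace=default --https --url hello-node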

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-365716 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-365716
localhost/kicbase/echo-server:functional-365716
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-365716 image ls --format short --alsologtostderr:
I1216 10:51:40.617902  228509 out.go:345] Setting OutFile to fd 1 ...
I1216 10:51:40.618183  228509 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:51:40.618195  228509 out.go:358] Setting ErrFile to fd 2...
I1216 10:51:40.618199  228509 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:51:40.618398  228509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
I1216 10:51:40.618982  228509 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:51:40.619091  228509 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:51:40.619434  228509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 10:51:40.619477  228509 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 10:51:40.636221  228509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37047
I1216 10:51:40.636798  228509 main.go:141] libmachine: () Calling .GetVersion
I1216 10:51:40.637435  228509 main.go:141] libmachine: Using API Version  1
I1216 10:51:40.637462  228509 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 10:51:40.637897  228509 main.go:141] libmachine: () Calling .GetMachineName
I1216 10:51:40.638101  228509 main.go:141] libmachine: (functional-365716) Calling .GetState
I1216 10:51:40.639900  228509 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 10:51:40.639947  228509 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 10:51:40.656620  228509 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45527
I1216 10:51:40.657182  228509 main.go:141] libmachine: () Calling .GetVersion
I1216 10:51:40.657820  228509 main.go:141] libmachine: Using API Version  1
I1216 10:51:40.657850  228509 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 10:51:40.658167  228509 main.go:141] libmachine: () Calling .GetMachineName
I1216 10:51:40.658373  228509 main.go:141] libmachine: (functional-365716) Calling .DriverName
I1216 10:51:40.658583  228509 ssh_runner.go:195] Run: systemctl --version
I1216 10:51:40.658607  228509 main.go:141] libmachine: (functional-365716) Calling .GetSSHHostname
I1216 10:51:40.661913  228509 main.go:141] libmachine: (functional-365716) DBG | domain functional-365716 has defined MAC address 52:54:00:63:87:a4 in network mk-functional-365716
I1216 10:51:40.662316  228509 main.go:141] libmachine: (functional-365716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:87:a4", ip: ""} in network mk-functional-365716: {Iface:virbr1 ExpiryTime:2024-12-16 11:43:09 +0000 UTC Type:0 Mac:52:54:00:63:87:a4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:functional-365716 Clientid:01:52:54:00:63:87:a4}
I1216 10:51:40.662337  228509 main.go:141] libmachine: (functional-365716) DBG | domain functional-365716 has defined IP address 192.168.39.192 and MAC address 52:54:00:63:87:a4 in network mk-functional-365716
I1216 10:51:40.662463  228509 main.go:141] libmachine: (functional-365716) Calling .GetSSHPort
I1216 10:51:40.662661  228509 main.go:141] libmachine: (functional-365716) Calling .GetSSHKeyPath
I1216 10:51:40.662800  228509 main.go:141] libmachine: (functional-365716) Calling .GetSSHUsername
I1216 10:51:40.662954  228509 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/functional-365716/id_rsa Username:docker}
I1216 10:51:40.752886  228509 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 10:51:40.805524  228509 main.go:141] libmachine: Making call to close driver server
I1216 10:51:40.805553  228509 main.go:141] libmachine: (functional-365716) Calling .Close
I1216 10:51:40.805857  228509 main.go:141] libmachine: Successfully made call to close driver server
I1216 10:51:40.805878  228509 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 10:51:40.805887  228509 main.go:141] libmachine: Making call to close driver server
I1216 10:51:40.805894  228509 main.go:141] libmachine: (functional-365716) Calling .Close
I1216 10:51:40.805894  228509 main.go:141] libmachine: (functional-365716) DBG | Closing plugin on server side
I1216 10:51:40.806143  228509 main.go:141] libmachine: (functional-365716) DBG | Closing plugin on server side
I1216 10:51:40.806166  228509 main.go:141] libmachine: Successfully made call to close driver server
I1216 10:51:40.806183  228509 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-365716 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-365716  | c2f30ce579746 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-365716  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/library/nginx                 | alpine             | 91ca84b4f5779 | 54MB   |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| localhost/my-image                      | functional-365716  | 999bd785423cd | 1.47MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-365716 image ls --format table --alsologtostderr:
I1216 10:51:45.331885  228680 out.go:345] Setting OutFile to fd 1 ...
I1216 10:51:45.332169  228680 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:51:45.332182  228680 out.go:358] Setting ErrFile to fd 2...
I1216 10:51:45.332186  228680 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:51:45.332414  228680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
I1216 10:51:45.333106  228680 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:51:45.333228  228680 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:51:45.334138  228680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 10:51:45.334206  228680 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 10:51:45.350239  228680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39603
I1216 10:51:45.350843  228680 main.go:141] libmachine: () Calling .GetVersion
I1216 10:51:45.351428  228680 main.go:141] libmachine: Using API Version  1
I1216 10:51:45.351446  228680 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 10:51:45.351913  228680 main.go:141] libmachine: () Calling .GetMachineName
I1216 10:51:45.352136  228680 main.go:141] libmachine: (functional-365716) Calling .GetState
I1216 10:51:45.354092  228680 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 10:51:45.354140  228680 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 10:51:45.370131  228680 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
I1216 10:51:45.370671  228680 main.go:141] libmachine: () Calling .GetVersion
I1216 10:51:45.371307  228680 main.go:141] libmachine: Using API Version  1
I1216 10:51:45.371341  228680 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 10:51:45.371731  228680 main.go:141] libmachine: () Calling .GetMachineName
I1216 10:51:45.371940  228680 main.go:141] libmachine: (functional-365716) Calling .DriverName
I1216 10:51:45.372150  228680 ssh_runner.go:195] Run: systemctl --version
I1216 10:51:45.372180  228680 main.go:141] libmachine: (functional-365716) Calling .GetSSHHostname
I1216 10:51:45.375146  228680 main.go:141] libmachine: (functional-365716) DBG | domain functional-365716 has defined MAC address 52:54:00:63:87:a4 in network mk-functional-365716
I1216 10:51:45.375618  228680 main.go:141] libmachine: (functional-365716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:87:a4", ip: ""} in network mk-functional-365716: {Iface:virbr1 ExpiryTime:2024-12-16 11:43:09 +0000 UTC Type:0 Mac:52:54:00:63:87:a4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:functional-365716 Clientid:01:52:54:00:63:87:a4}
I1216 10:51:45.375650  228680 main.go:141] libmachine: (functional-365716) DBG | domain functional-365716 has defined IP address 192.168.39.192 and MAC address 52:54:00:63:87:a4 in network mk-functional-365716
I1216 10:51:45.375791  228680 main.go:141] libmachine: (functional-365716) Calling .GetSSHPort
I1216 10:51:45.376001  228680 main.go:141] libmachine: (functional-365716) Calling .GetSSHKeyPath
I1216 10:51:45.376177  228680 main.go:141] libmachine: (functional-365716) Calling .GetSSHUsername
I1216 10:51:45.376337  228680 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/functional-365716/id_rsa Username:docker}
I1216 10:51:45.481027  228680 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 10:51:45.542201  228680 main.go:141] libmachine: Making call to close driver server
I1216 10:51:45.542223  228680 main.go:141] libmachine: (functional-365716) Calling .Close
I1216 10:51:45.542554  228680 main.go:141] libmachine: Successfully made call to close driver server
I1216 10:51:45.542576  228680 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 10:51:45.542585  228680 main.go:141] libmachine: Making call to close driver server
I1216 10:51:45.542584  228680 main.go:141] libmachine: (functional-365716) DBG | Closing plugin on server side
I1216 10:51:45.542593  228680 main.go:141] libmachine: (functional-365716) Calling .Close
I1216 10:51:45.542930  228680 main.go:141] libmachine: (functional-365716) DBG | Closing plugin on server side
I1216 10:51:45.543022  228680 main.go:141] libmachine: Successfully made call to close driver server
I1216 10:51:45.543057  228680 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-365716 image ls --format json --alsologtostderr:
[{"id":"999bd785423cdbdb1eb8f2d3880b942179749501aec21cef3d63fe9316682a3b","repoDigests":["localhost/my-image@sha256:5923dac72d354e4dae633c86513864927f4fc6db3c25af866e274f7544306ed8"],"repoTags":["localhost/my-image:functional-365716"],"size":"1468599"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8f
ca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-365716"],"size":"4943877"},{"id":"2d61d6585d59707b6a563c63ea3da818db059006b02b93aaa87ac17f1b88569a","repoDigests":["docker.io/library/9f1d20e246b5a56db904c30cb6bb9109a600e914d4def7c9d20175d1e037f52a-tmp@sha256:3c3082d5b4aa3fd24a89b36d20e651777391a4c7c5790ba224b03c8ee1f304a3"],"repoTags":
[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53958631"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.
io/library/nginx:latest"],"size":"195919252"},{"id":"c2f30ce57974647d2901c0c69da8ab69537df2db1ddbe4fc0a45ecd80221691b","repoDigests":["localhost/minikube-local-cache-test@sha256:687d4b7052f5eb9c8a05a2c596a0b7502c00e79a1bb05e2d153b0d5a939df7b3"],"repoTags":["localhost/minikube-local-cache-test:functional-365716"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindes
t/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"bea
e173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8
006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4
-glibc"],"size":"4631262"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-365716 image ls --format json --alsologtostderr:
I1216 10:51:45.041811  228656 out.go:345] Setting OutFile to fd 1 ...
I1216 10:51:45.042071  228656 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:51:45.042081  228656 out.go:358] Setting ErrFile to fd 2...
I1216 10:51:45.042103  228656 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:51:45.042329  228656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
I1216 10:51:45.042985  228656 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:51:45.043106  228656 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:51:45.043470  228656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 10:51:45.043527  228656 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 10:51:45.058917  228656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
I1216 10:51:45.059482  228656 main.go:141] libmachine: () Calling .GetVersion
I1216 10:51:45.060086  228656 main.go:141] libmachine: Using API Version  1
I1216 10:51:45.060112  228656 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 10:51:45.060448  228656 main.go:141] libmachine: () Calling .GetMachineName
I1216 10:51:45.060685  228656 main.go:141] libmachine: (functional-365716) Calling .GetState
I1216 10:51:45.062815  228656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 10:51:45.062862  228656 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 10:51:45.078177  228656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
I1216 10:51:45.078752  228656 main.go:141] libmachine: () Calling .GetVersion
I1216 10:51:45.079192  228656 main.go:141] libmachine: Using API Version  1
I1216 10:51:45.079216  228656 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 10:51:45.079616  228656 main.go:141] libmachine: () Calling .GetMachineName
I1216 10:51:45.079857  228656 main.go:141] libmachine: (functional-365716) Calling .DriverName
I1216 10:51:45.080086  228656 ssh_runner.go:195] Run: systemctl --version
I1216 10:51:45.080116  228656 main.go:141] libmachine: (functional-365716) Calling .GetSSHHostname
I1216 10:51:45.083061  228656 main.go:141] libmachine: (functional-365716) DBG | domain functional-365716 has defined MAC address 52:54:00:63:87:a4 in network mk-functional-365716
I1216 10:51:45.083455  228656 main.go:141] libmachine: (functional-365716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:87:a4", ip: ""} in network mk-functional-365716: {Iface:virbr1 ExpiryTime:2024-12-16 11:43:09 +0000 UTC Type:0 Mac:52:54:00:63:87:a4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:functional-365716 Clientid:01:52:54:00:63:87:a4}
I1216 10:51:45.083485  228656 main.go:141] libmachine: (functional-365716) DBG | domain functional-365716 has defined IP address 192.168.39.192 and MAC address 52:54:00:63:87:a4 in network mk-functional-365716
I1216 10:51:45.083759  228656 main.go:141] libmachine: (functional-365716) Calling .GetSSHPort
I1216 10:51:45.083944  228656 main.go:141] libmachine: (functional-365716) Calling .GetSSHKeyPath
I1216 10:51:45.084123  228656 main.go:141] libmachine: (functional-365716) Calling .GetSSHUsername
I1216 10:51:45.084300  228656 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/functional-365716/id_rsa Username:docker}
I1216 10:51:45.190599  228656 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 10:51:45.272544  228656 main.go:141] libmachine: Making call to close driver server
I1216 10:51:45.272562  228656 main.go:141] libmachine: (functional-365716) Calling .Close
I1216 10:51:45.272850  228656 main.go:141] libmachine: Successfully made call to close driver server
I1216 10:51:45.272868  228656 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 10:51:45.272878  228656 main.go:141] libmachine: Making call to close driver server
I1216 10:51:45.272891  228656 main.go:141] libmachine: (functional-365716) Calling .Close
I1216 10:51:45.273175  228656 main.go:141] libmachine: Successfully made call to close driver server
I1216 10:51:45.273252  228656 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 10:51:45.273278  228656 main.go:141] libmachine: (functional-365716) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-365716 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371
repoTags:
- docker.io/library/nginx:alpine
size: "53958631"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: c2f30ce57974647d2901c0c69da8ab69537df2db1ddbe4fc0a45ecd80221691b
repoDigests:
- localhost/minikube-local-cache-test@sha256:687d4b7052f5eb9c8a05a2c596a0b7502c00e79a1bb05e2d153b0d5a939df7b3
repoTags:
- localhost/minikube-local-cache-test:functional-365716
size: "3330"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-365716
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-365716 image ls --format yaml --alsologtostderr:
I1216 10:51:40.861709  228533 out.go:345] Setting OutFile to fd 1 ...
I1216 10:51:40.861845  228533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:51:40.861856  228533 out.go:358] Setting ErrFile to fd 2...
I1216 10:51:40.861862  228533 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:51:40.862048  228533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
I1216 10:51:40.862689  228533 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:51:40.862807  228533 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:51:40.863175  228533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 10:51:40.863230  228533 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 10:51:40.878054  228533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
I1216 10:51:40.878553  228533 main.go:141] libmachine: () Calling .GetVersion
I1216 10:51:40.879249  228533 main.go:141] libmachine: Using API Version  1
I1216 10:51:40.879278  228533 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 10:51:40.879637  228533 main.go:141] libmachine: () Calling .GetMachineName
I1216 10:51:40.879901  228533 main.go:141] libmachine: (functional-365716) Calling .GetState
I1216 10:51:40.881901  228533 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 10:51:40.881947  228533 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 10:51:40.896356  228533 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45145
I1216 10:51:40.896780  228533 main.go:141] libmachine: () Calling .GetVersion
I1216 10:51:40.897306  228533 main.go:141] libmachine: Using API Version  1
I1216 10:51:40.897335  228533 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 10:51:40.897640  228533 main.go:141] libmachine: () Calling .GetMachineName
I1216 10:51:40.897838  228533 main.go:141] libmachine: (functional-365716) Calling .DriverName
I1216 10:51:40.898046  228533 ssh_runner.go:195] Run: systemctl --version
I1216 10:51:40.898073  228533 main.go:141] libmachine: (functional-365716) Calling .GetSSHHostname
I1216 10:51:40.900677  228533 main.go:141] libmachine: (functional-365716) DBG | domain functional-365716 has defined MAC address 52:54:00:63:87:a4 in network mk-functional-365716
I1216 10:51:40.901074  228533 main.go:141] libmachine: (functional-365716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:87:a4", ip: ""} in network mk-functional-365716: {Iface:virbr1 ExpiryTime:2024-12-16 11:43:09 +0000 UTC Type:0 Mac:52:54:00:63:87:a4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:functional-365716 Clientid:01:52:54:00:63:87:a4}
I1216 10:51:40.901110  228533 main.go:141] libmachine: (functional-365716) DBG | domain functional-365716 has defined IP address 192.168.39.192 and MAC address 52:54:00:63:87:a4 in network mk-functional-365716
I1216 10:51:40.901212  228533 main.go:141] libmachine: (functional-365716) Calling .GetSSHPort
I1216 10:51:40.901384  228533 main.go:141] libmachine: (functional-365716) Calling .GetSSHKeyPath
I1216 10:51:40.901540  228533 main.go:141] libmachine: (functional-365716) Calling .GetSSHUsername
I1216 10:51:40.901660  228533 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/functional-365716/id_rsa Username:docker}
I1216 10:51:40.977819  228533 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 10:51:41.032571  228533 main.go:141] libmachine: Making call to close driver server
I1216 10:51:41.032584  228533 main.go:141] libmachine: (functional-365716) Calling .Close
I1216 10:51:41.032895  228533 main.go:141] libmachine: Successfully made call to close driver server
I1216 10:51:41.032923  228533 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 10:51:41.032943  228533 main.go:141] libmachine: Making call to close driver server
I1216 10:51:41.032969  228533 main.go:141] libmachine: (functional-365716) Calling .Close
I1216 10:51:41.032895  228533 main.go:141] libmachine: (functional-365716) DBG | Closing plugin on server side
I1216 10:51:41.033207  228533 main.go:141] libmachine: Successfully made call to close driver server
I1216 10:51:41.033265  228533 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 10:51:41.033369  228533 main.go:141] libmachine: (functional-365716) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365716 ssh pgrep buildkitd: exit status 1 (212.898524ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image build -t localhost/my-image:functional-365716 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 image build -t localhost/my-image:functional-365716 testdata/build --alsologtostderr: (3.414215836s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-365716 image build -t localhost/my-image:functional-365716 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2d61d6585d5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-365716
--> 999bd785423
Successfully tagged localhost/my-image:functional-365716
999bd785423cdbdb1eb8f2d3880b942179749501aec21cef3d63fe9316682a3b
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-365716 image build -t localhost/my-image:functional-365716 testdata/build --alsologtostderr:
I1216 10:51:41.297668  228588 out.go:345] Setting OutFile to fd 1 ...
I1216 10:51:41.297825  228588 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:51:41.297837  228588 out.go:358] Setting ErrFile to fd 2...
I1216 10:51:41.297842  228588 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 10:51:41.298047  228588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
I1216 10:51:41.298710  228588 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:51:41.299280  228588 config.go:182] Loaded profile config "functional-365716": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 10:51:41.299673  228588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 10:51:41.299723  228588 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 10:51:41.315146  228588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34601
I1216 10:51:41.315708  228588 main.go:141] libmachine: () Calling .GetVersion
I1216 10:51:41.316335  228588 main.go:141] libmachine: Using API Version  1
I1216 10:51:41.316357  228588 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 10:51:41.316705  228588 main.go:141] libmachine: () Calling .GetMachineName
I1216 10:51:41.316884  228588 main.go:141] libmachine: (functional-365716) Calling .GetState
I1216 10:51:41.318661  228588 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 10:51:41.318710  228588 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 10:51:41.333654  228588 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
I1216 10:51:41.334139  228588 main.go:141] libmachine: () Calling .GetVersion
I1216 10:51:41.334695  228588 main.go:141] libmachine: Using API Version  1
I1216 10:51:41.334724  228588 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 10:51:41.335036  228588 main.go:141] libmachine: () Calling .GetMachineName
I1216 10:51:41.335213  228588 main.go:141] libmachine: (functional-365716) Calling .DriverName
I1216 10:51:41.335430  228588 ssh_runner.go:195] Run: systemctl --version
I1216 10:51:41.335461  228588 main.go:141] libmachine: (functional-365716) Calling .GetSSHHostname
I1216 10:51:41.338454  228588 main.go:141] libmachine: (functional-365716) DBG | domain functional-365716 has defined MAC address 52:54:00:63:87:a4 in network mk-functional-365716
I1216 10:51:41.338837  228588 main.go:141] libmachine: (functional-365716) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:87:a4", ip: ""} in network mk-functional-365716: {Iface:virbr1 ExpiryTime:2024-12-16 11:43:09 +0000 UTC Type:0 Mac:52:54:00:63:87:a4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:functional-365716 Clientid:01:52:54:00:63:87:a4}
I1216 10:51:41.338868  228588 main.go:141] libmachine: (functional-365716) DBG | domain functional-365716 has defined IP address 192.168.39.192 and MAC address 52:54:00:63:87:a4 in network mk-functional-365716
I1216 10:51:41.338994  228588 main.go:141] libmachine: (functional-365716) Calling .GetSSHPort
I1216 10:51:41.339146  228588 main.go:141] libmachine: (functional-365716) Calling .GetSSHKeyPath
I1216 10:51:41.339287  228588 main.go:141] libmachine: (functional-365716) Calling .GetSSHUsername
I1216 10:51:41.339402  228588 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/functional-365716/id_rsa Username:docker}
I1216 10:51:41.415295  228588 build_images.go:161] Building image from path: /tmp/build.3537357521.tar
I1216 10:51:41.415388  228588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 10:51:41.425984  228588 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3537357521.tar
I1216 10:51:41.430093  228588 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3537357521.tar: stat -c "%s %y" /var/lib/minikube/build/build.3537357521.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3537357521.tar': No such file or directory
I1216 10:51:41.430136  228588 ssh_runner.go:362] scp /tmp/build.3537357521.tar --> /var/lib/minikube/build/build.3537357521.tar (3072 bytes)
I1216 10:51:41.454264  228588 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3537357521
I1216 10:51:41.464737  228588 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3537357521 -xf /var/lib/minikube/build/build.3537357521.tar
I1216 10:51:41.473371  228588 crio.go:315] Building image: /var/lib/minikube/build/build.3537357521
I1216 10:51:41.473434  228588 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-365716 /var/lib/minikube/build/build.3537357521 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 10:51:44.633364  228588 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-365716 /var/lib/minikube/build/build.3537357521 --cgroup-manager=cgroupfs: (3.159898946s)
I1216 10:51:44.633463  228588 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3537357521
I1216 10:51:44.645105  228588 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3537357521.tar
I1216 10:51:44.659573  228588 build_images.go:217] Built localhost/my-image:functional-365716 from /tmp/build.3537357521.tar
I1216 10:51:44.659613  228588 build_images.go:133] succeeded building to: functional-365716
I1216 10:51:44.659621  228588 build_images.go:134] failed building to: 
I1216 10:51:44.659652  228588 main.go:141] libmachine: Making call to close driver server
I1216 10:51:44.659664  228588 main.go:141] libmachine: (functional-365716) Calling .Close
I1216 10:51:44.660000  228588 main.go:141] libmachine: Successfully made call to close driver server
I1216 10:51:44.660022  228588 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 10:51:44.660034  228588 main.go:141] libmachine: Making call to close driver server
I1216 10:51:44.660044  228588 main.go:141] libmachine: (functional-365716) Calling .Close
I1216 10:51:44.660338  228588 main.go:141] libmachine: Successfully made call to close driver server
I1216 10:51:44.660350  228588 main.go:141] libmachine: (functional-365716) DBG | Closing plugin on server side
I1216 10:51:44.660353  228588 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)
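
Reduced to the user-facing commands, the build test above is just the following; a minimal sketch, assuming the Dockerfile shown in the STEP lines lives in minikube's testdata/build directory:
	# build on the guest (with the crio runtime this is delegated to podman build, as the stderr shows)
	out/minikube-linux-amd64 -p functional-365716 image build -t localhost/my-image:functional-365716 testdata/build --alsologtostderr
	# confirm the new tag is visible to the container runtime
	out/minikube-linux-amd64 -p functional-365716 image ls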

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.510084155s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-365716
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.192:31432
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image load --daemon kicbase/echo-server:functional-365716 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 image load --daemon kicbase/echo-server:functional-365716 --alsologtostderr: (1.253427528s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.65s)
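
For reference, the load-and-verify sequence exercised above can be repeated by hand. A minimal sketch, assuming the functional-365716 profile from this run is still up and kicbase/echo-server:1.0 has already been pulled:

  # tag the local image with the profile-specific name the test uses
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-365716
  # push it from the host Docker daemon into the cluster's container runtime
  out/minikube-linux-amd64 -p functional-365716 image load --daemon kicbase/echo-server:functional-365716 --alsologtostderr
  # confirm the image is now visible inside the cluster
  out/minikube-linux-amd64 -p functional-365716 image ls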

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image load --daemon kicbase/echo-server:functional-365716 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-365716
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image load --daemon kicbase/echo-server:functional-365716 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.89s)

TestFunctional/parallel/MountCmd/specific-port (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdspecific-port1933179023/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (197.328049ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 10:51:26.642194  217519 retry.go:31] will retry after 477.997645ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdspecific-port1933179023/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365716 ssh "sudo umount -f /mount-9p": exit status 1 (243.644386ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-365716 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdspecific-port1933179023/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.83s)
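
The specific-port check above amounts to starting a 9p mount on a fixed port and probing it from inside the guest. A minimal sketch with the same commands, assuming the profile is running; the host path /tmp/mount-src is only an illustrative stand-in for the temp directory the test creates:

  # start the 9p mount in the background on port 46464 (host path is a placeholder)
  out/minikube-linux-amd64 mount -p functional-365716 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 --port 46464 &
  # verify from the guest that /mount-9p is backed by a 9p filesystem
  out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T /mount-9p | grep 9p"
  # force-unmount when done; "not mounted" here just means the mount already went away
  out/minikube-linux-amd64 -p functional-365716 ssh "sudo umount -f /mount-9p"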

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image save kicbase/echo-server:functional-365716 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3787956137/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3787956137/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3787956137/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T" /mount1: exit status 1 (291.94663ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 10:51:28.565223  217519 retry.go:31] will retry after 396.960268ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-365716 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3787956137/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3787956137/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-365716 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3787956137/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.49s)
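
Stray mount daemons are reaped with the --kill flag seen above rather than by unmounting each path. A minimal sketch, assuming leftover mount processes for this profile:

  # kill any background "minikube mount" processes belonging to the profile
  out/minikube-linux-amd64 mount -p functional-365716 --kill=true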

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image rm kicbase/echo-server:functional-365716 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 image rm kicbase/echo-server:functional-365716 --alsologtostderr: (1.617435456s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.93s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-365716
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-365716 image save --daemon kicbase/echo-server:functional-365716 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-365716 image save --daemon kicbase/echo-server:functional-365716 --alsologtostderr: (1.674107614s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-365716
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.71s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-365716
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-365716
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-365716
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (198.4s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-943381 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 10:53:48.521850  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-943381 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m17.745314805s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (198.40s)
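
The multi-control-plane cluster used by the rest of this suite comes from the --ha start shown above. A minimal sketch of the same invocation, assuming the kvm2 driver and crio runtime used in this job:

  # start a highly-available cluster (multiple control planes) and wait for it to settle
  out/minikube-linux-amd64 start -p ha-943381 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
  # report per-node status once the start completes
  out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr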

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.47s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-943381 -- rollout status deployment/busybox: (3.363182104s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-64p7q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-6jlds -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-dw7sh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-64p7q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-6jlds -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-dw7sh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-64p7q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-6jlds -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-dw7sh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.47s)

TestMultiControlPlane/serial/PingHostFromPods (1.2s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-64p7q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-64p7q -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-6jlds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-6jlds -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-dw7sh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-dw7sh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
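
The host-reachability check resolves host.minikube.internal inside each busybox pod and pings the address it returns. A minimal sketch against a single pod, assuming the pod name from this run:

  # pull the host IP out of busybox's nslookup output (the answer sits on line 5)
  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-64p7q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  # ping the gateway address returned above
  out/minikube-linux-amd64 kubectl -p ha-943381 -- exec busybox-7dff88458-64p7q -- sh -c "ping -c 1 192.168.39.1"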

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.29s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-943381 -v=7 --alsologtostderr
E1216 10:56:06.844057  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:56:06.850603  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:56:06.862048  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:56:06.883541  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:56:06.925015  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:56:07.006500  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:56:07.168205  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:56:07.490501  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:56:08.131935  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:56:09.413582  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-943381 -v=7 --alsologtostderr: (53.443084969s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr
E1216 10:56:11.975784  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.29s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-943381 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

TestMultiControlPlane/serial/CopyFile (13.09s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp testdata/cp-test.txt ha-943381:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile418221772/001/cp-test_ha-943381.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381:/home/docker/cp-test.txt ha-943381-m02:/home/docker/cp-test_ha-943381_ha-943381-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m02 "sudo cat /home/docker/cp-test_ha-943381_ha-943381-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381:/home/docker/cp-test.txt ha-943381-m03:/home/docker/cp-test_ha-943381_ha-943381-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m03 "sudo cat /home/docker/cp-test_ha-943381_ha-943381-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381:/home/docker/cp-test.txt ha-943381-m04:/home/docker/cp-test_ha-943381_ha-943381-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381 "sudo cat /home/docker/cp-test.txt"
E1216 10:56:17.097561  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m04 "sudo cat /home/docker/cp-test_ha-943381_ha-943381-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp testdata/cp-test.txt ha-943381-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile418221772/001/cp-test_ha-943381-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m02:/home/docker/cp-test.txt ha-943381:/home/docker/cp-test_ha-943381-m02_ha-943381.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381 "sudo cat /home/docker/cp-test_ha-943381-m02_ha-943381.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m02:/home/docker/cp-test.txt ha-943381-m03:/home/docker/cp-test_ha-943381-m02_ha-943381-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m03 "sudo cat /home/docker/cp-test_ha-943381-m02_ha-943381-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m02:/home/docker/cp-test.txt ha-943381-m04:/home/docker/cp-test_ha-943381-m02_ha-943381-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m04 "sudo cat /home/docker/cp-test_ha-943381-m02_ha-943381-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp testdata/cp-test.txt ha-943381-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile418221772/001/cp-test_ha-943381-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m03:/home/docker/cp-test.txt ha-943381:/home/docker/cp-test_ha-943381-m03_ha-943381.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381 "sudo cat /home/docker/cp-test_ha-943381-m03_ha-943381.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m03:/home/docker/cp-test.txt ha-943381-m02:/home/docker/cp-test_ha-943381-m03_ha-943381-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m02 "sudo cat /home/docker/cp-test_ha-943381-m03_ha-943381-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m03:/home/docker/cp-test.txt ha-943381-m04:/home/docker/cp-test_ha-943381-m03_ha-943381-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m04 "sudo cat /home/docker/cp-test_ha-943381-m03_ha-943381-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp testdata/cp-test.txt ha-943381-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile418221772/001/cp-test_ha-943381-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m04:/home/docker/cp-test.txt ha-943381:/home/docker/cp-test_ha-943381-m04_ha-943381.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381 "sudo cat /home/docker/cp-test_ha-943381-m04_ha-943381.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m04:/home/docker/cp-test.txt ha-943381-m02:/home/docker/cp-test_ha-943381-m04_ha-943381-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m02 "sudo cat /home/docker/cp-test_ha-943381-m04_ha-943381-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 cp ha-943381-m04:/home/docker/cp-test.txt ha-943381-m03:/home/docker/cp-test_ha-943381-m04_ha-943381-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m03 "sudo cat /home/docker/cp-test_ha-943381-m04_ha-943381-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.09s)
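
Each copy above is validated by reading the file back over ssh. A minimal sketch of one host-to-node and one node-to-node copy, using the same paths as the test:

  # copy a file from the host into the primary control-plane node and read it back
  out/minikube-linux-amd64 -p ha-943381 cp testdata/cp-test.txt ha-943381:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381 "sudo cat /home/docker/cp-test.txt"
  # copy the same file from one node to another and verify it on the destination
  out/minikube-linux-amd64 -p ha-943381 cp ha-943381:/home/docker/cp-test.txt ha-943381-m02:/home/docker/cp-test_ha-943381_ha-943381-m02.txt
  out/minikube-linux-amd64 -p ha-943381 ssh -n ha-943381-m02 "sudo cat /home/docker/cp-test_ha-943381_ha-943381-m02.txt"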

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.42s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 node stop m02 -v=7 --alsologtostderr
E1216 10:56:27.338963  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:56:47.821166  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 10:57:28.782677  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-943381 node stop m02 -v=7 --alsologtostderr: (1m30.786179636s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr: exit status 7 (632.513683ms)

                                                
                                                
-- stdout --
	ha-943381
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-943381-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-943381-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-943381-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 10:57:57.364588  233379 out.go:345] Setting OutFile to fd 1 ...
	I1216 10:57:57.364717  233379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:57:57.364727  233379 out.go:358] Setting ErrFile to fd 2...
	I1216 10:57:57.364733  233379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 10:57:57.364983  233379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 10:57:57.365209  233379 out.go:352] Setting JSON to false
	I1216 10:57:57.365241  233379 mustload.go:65] Loading cluster: ha-943381
	I1216 10:57:57.365348  233379 notify.go:220] Checking for updates...
	I1216 10:57:57.365760  233379 config.go:182] Loaded profile config "ha-943381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 10:57:57.365787  233379 status.go:174] checking status of ha-943381 ...
	I1216 10:57:57.366241  233379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:57:57.366299  233379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:57:57.386711  233379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45449
	I1216 10:57:57.387219  233379 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:57:57.387947  233379 main.go:141] libmachine: Using API Version  1
	I1216 10:57:57.387983  233379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:57:57.388329  233379 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:57:57.388538  233379 main.go:141] libmachine: (ha-943381) Calling .GetState
	I1216 10:57:57.390289  233379 status.go:371] ha-943381 host status = "Running" (err=<nil>)
	I1216 10:57:57.390307  233379 host.go:66] Checking if "ha-943381" exists ...
	I1216 10:57:57.390634  233379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:57:57.390690  233379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:57:57.405350  233379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42805
	I1216 10:57:57.405827  233379 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:57:57.406307  233379 main.go:141] libmachine: Using API Version  1
	I1216 10:57:57.406329  233379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:57:57.406647  233379 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:57:57.406854  233379 main.go:141] libmachine: (ha-943381) Calling .GetIP
	I1216 10:57:57.409786  233379 main.go:141] libmachine: (ha-943381) DBG | domain ha-943381 has defined MAC address 52:54:00:58:f1:4c in network mk-ha-943381
	I1216 10:57:57.410231  233379 main.go:141] libmachine: (ha-943381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:f1:4c", ip: ""} in network mk-ha-943381: {Iface:virbr1 ExpiryTime:2024-12-16 11:52:07 +0000 UTC Type:0 Mac:52:54:00:58:f1:4c Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-943381 Clientid:01:52:54:00:58:f1:4c}
	I1216 10:57:57.410267  233379 main.go:141] libmachine: (ha-943381) DBG | domain ha-943381 has defined IP address 192.168.39.99 and MAC address 52:54:00:58:f1:4c in network mk-ha-943381
	I1216 10:57:57.410409  233379 host.go:66] Checking if "ha-943381" exists ...
	I1216 10:57:57.410741  233379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:57:57.410791  233379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:57:57.426625  233379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38773
	I1216 10:57:57.427186  233379 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:57:57.427678  233379 main.go:141] libmachine: Using API Version  1
	I1216 10:57:57.427701  233379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:57:57.428123  233379 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:57:57.428325  233379 main.go:141] libmachine: (ha-943381) Calling .DriverName
	I1216 10:57:57.428528  233379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 10:57:57.428576  233379 main.go:141] libmachine: (ha-943381) Calling .GetSSHHostname
	I1216 10:57:57.431794  233379 main.go:141] libmachine: (ha-943381) DBG | domain ha-943381 has defined MAC address 52:54:00:58:f1:4c in network mk-ha-943381
	I1216 10:57:57.432327  233379 main.go:141] libmachine: (ha-943381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:f1:4c", ip: ""} in network mk-ha-943381: {Iface:virbr1 ExpiryTime:2024-12-16 11:52:07 +0000 UTC Type:0 Mac:52:54:00:58:f1:4c Iaid: IPaddr:192.168.39.99 Prefix:24 Hostname:ha-943381 Clientid:01:52:54:00:58:f1:4c}
	I1216 10:57:57.432356  233379 main.go:141] libmachine: (ha-943381) DBG | domain ha-943381 has defined IP address 192.168.39.99 and MAC address 52:54:00:58:f1:4c in network mk-ha-943381
	I1216 10:57:57.432586  233379 main.go:141] libmachine: (ha-943381) Calling .GetSSHPort
	I1216 10:57:57.432779  233379 main.go:141] libmachine: (ha-943381) Calling .GetSSHKeyPath
	I1216 10:57:57.432925  233379 main.go:141] libmachine: (ha-943381) Calling .GetSSHUsername
	I1216 10:57:57.433064  233379 sshutil.go:53] new ssh client: &{IP:192.168.39.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/ha-943381/id_rsa Username:docker}
	I1216 10:57:57.516386  233379 ssh_runner.go:195] Run: systemctl --version
	I1216 10:57:57.523363  233379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:57:57.537575  233379 kubeconfig.go:125] found "ha-943381" server: "https://192.168.39.254:8443"
	I1216 10:57:57.537615  233379 api_server.go:166] Checking apiserver status ...
	I1216 10:57:57.537647  233379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 10:57:57.552365  233379 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup
	W1216 10:57:57.562378  233379 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1146/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 10:57:57.562456  233379 ssh_runner.go:195] Run: ls
	I1216 10:57:57.567360  233379 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1216 10:57:57.571624  233379 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1216 10:57:57.571650  233379 status.go:463] ha-943381 apiserver status = Running (err=<nil>)
	I1216 10:57:57.571661  233379 status.go:176] ha-943381 status: &{Name:ha-943381 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 10:57:57.571677  233379 status.go:174] checking status of ha-943381-m02 ...
	I1216 10:57:57.572015  233379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:57:57.572063  233379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:57:57.587092  233379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35565
	I1216 10:57:57.587567  233379 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:57:57.588086  233379 main.go:141] libmachine: Using API Version  1
	I1216 10:57:57.588108  233379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:57:57.588421  233379 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:57:57.588616  233379 main.go:141] libmachine: (ha-943381-m02) Calling .GetState
	I1216 10:57:57.590050  233379 status.go:371] ha-943381-m02 host status = "Stopped" (err=<nil>)
	I1216 10:57:57.590066  233379 status.go:384] host is not running, skipping remaining checks
	I1216 10:57:57.590072  233379 status.go:176] ha-943381-m02 status: &{Name:ha-943381-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 10:57:57.590089  233379 status.go:174] checking status of ha-943381-m03 ...
	I1216 10:57:57.590482  233379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:57:57.590532  233379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:57:57.606006  233379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36597
	I1216 10:57:57.606512  233379 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:57:57.607119  233379 main.go:141] libmachine: Using API Version  1
	I1216 10:57:57.607150  233379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:57:57.607501  233379 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:57:57.607729  233379 main.go:141] libmachine: (ha-943381-m03) Calling .GetState
	I1216 10:57:57.609214  233379 status.go:371] ha-943381-m03 host status = "Running" (err=<nil>)
	I1216 10:57:57.609233  233379 host.go:66] Checking if "ha-943381-m03" exists ...
	I1216 10:57:57.609640  233379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:57:57.609682  233379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:57:57.625162  233379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42395
	I1216 10:57:57.625696  233379 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:57:57.626146  233379 main.go:141] libmachine: Using API Version  1
	I1216 10:57:57.626167  233379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:57:57.626476  233379 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:57:57.626652  233379 main.go:141] libmachine: (ha-943381-m03) Calling .GetIP
	I1216 10:57:57.629373  233379 main.go:141] libmachine: (ha-943381-m03) DBG | domain ha-943381-m03 has defined MAC address 52:54:00:84:f9:1d in network mk-ha-943381
	I1216 10:57:57.629745  233379 main.go:141] libmachine: (ha-943381-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:f9:1d", ip: ""} in network mk-ha-943381: {Iface:virbr1 ExpiryTime:2024-12-16 11:54:07 +0000 UTC Type:0 Mac:52:54:00:84:f9:1d Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-943381-m03 Clientid:01:52:54:00:84:f9:1d}
	I1216 10:57:57.629776  233379 main.go:141] libmachine: (ha-943381-m03) DBG | domain ha-943381-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:84:f9:1d in network mk-ha-943381
	I1216 10:57:57.629920  233379 host.go:66] Checking if "ha-943381-m03" exists ...
	I1216 10:57:57.630240  233379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:57:57.630278  233379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:57:57.645832  233379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46383
	I1216 10:57:57.646380  233379 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:57:57.646960  233379 main.go:141] libmachine: Using API Version  1
	I1216 10:57:57.646980  233379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:57:57.647303  233379 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:57:57.647513  233379 main.go:141] libmachine: (ha-943381-m03) Calling .DriverName
	I1216 10:57:57.647696  233379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 10:57:57.647721  233379 main.go:141] libmachine: (ha-943381-m03) Calling .GetSSHHostname
	I1216 10:57:57.650734  233379 main.go:141] libmachine: (ha-943381-m03) DBG | domain ha-943381-m03 has defined MAC address 52:54:00:84:f9:1d in network mk-ha-943381
	I1216 10:57:57.651197  233379 main.go:141] libmachine: (ha-943381-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:f9:1d", ip: ""} in network mk-ha-943381: {Iface:virbr1 ExpiryTime:2024-12-16 11:54:07 +0000 UTC Type:0 Mac:52:54:00:84:f9:1d Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:ha-943381-m03 Clientid:01:52:54:00:84:f9:1d}
	I1216 10:57:57.651226  233379 main.go:141] libmachine: (ha-943381-m03) DBG | domain ha-943381-m03 has defined IP address 192.168.39.180 and MAC address 52:54:00:84:f9:1d in network mk-ha-943381
	I1216 10:57:57.651392  233379 main.go:141] libmachine: (ha-943381-m03) Calling .GetSSHPort
	I1216 10:57:57.651583  233379 main.go:141] libmachine: (ha-943381-m03) Calling .GetSSHKeyPath
	I1216 10:57:57.651738  233379 main.go:141] libmachine: (ha-943381-m03) Calling .GetSSHUsername
	I1216 10:57:57.651869  233379 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/ha-943381-m03/id_rsa Username:docker}
	I1216 10:57:57.737153  233379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:57:57.751642  233379 kubeconfig.go:125] found "ha-943381" server: "https://192.168.39.254:8443"
	I1216 10:57:57.751671  233379 api_server.go:166] Checking apiserver status ...
	I1216 10:57:57.751705  233379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 10:57:57.764807  233379 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1467/cgroup
	W1216 10:57:57.775851  233379 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1467/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 10:57:57.775907  233379 ssh_runner.go:195] Run: ls
	I1216 10:57:57.780089  233379 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1216 10:57:57.785403  233379 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1216 10:57:57.785428  233379 status.go:463] ha-943381-m03 apiserver status = Running (err=<nil>)
	I1216 10:57:57.785436  233379 status.go:176] ha-943381-m03 status: &{Name:ha-943381-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 10:57:57.785461  233379 status.go:174] checking status of ha-943381-m04 ...
	I1216 10:57:57.785764  233379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:57:57.785807  233379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:57:57.801305  233379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34143
	I1216 10:57:57.801803  233379 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:57:57.802413  233379 main.go:141] libmachine: Using API Version  1
	I1216 10:57:57.802441  233379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:57:57.802822  233379 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:57:57.803045  233379 main.go:141] libmachine: (ha-943381-m04) Calling .GetState
	I1216 10:57:57.804823  233379 status.go:371] ha-943381-m04 host status = "Running" (err=<nil>)
	I1216 10:57:57.804840  233379 host.go:66] Checking if "ha-943381-m04" exists ...
	I1216 10:57:57.805226  233379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:57:57.805278  233379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:57:57.820681  233379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43033
	I1216 10:57:57.821236  233379 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:57:57.821760  233379 main.go:141] libmachine: Using API Version  1
	I1216 10:57:57.821783  233379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:57:57.822102  233379 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:57:57.822334  233379 main.go:141] libmachine: (ha-943381-m04) Calling .GetIP
	I1216 10:57:57.825116  233379 main.go:141] libmachine: (ha-943381-m04) DBG | domain ha-943381-m04 has defined MAC address 52:54:00:e5:24:19 in network mk-ha-943381
	I1216 10:57:57.825715  233379 main.go:141] libmachine: (ha-943381-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:24:19", ip: ""} in network mk-ha-943381: {Iface:virbr1 ExpiryTime:2024-12-16 11:55:33 +0000 UTC Type:0 Mac:52:54:00:e5:24:19 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-943381-m04 Clientid:01:52:54:00:e5:24:19}
	I1216 10:57:57.825749  233379 main.go:141] libmachine: (ha-943381-m04) DBG | domain ha-943381-m04 has defined IP address 192.168.39.170 and MAC address 52:54:00:e5:24:19 in network mk-ha-943381
	I1216 10:57:57.825920  233379 host.go:66] Checking if "ha-943381-m04" exists ...
	I1216 10:57:57.826346  233379 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 10:57:57.826396  233379 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 10:57:57.841689  233379 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I1216 10:57:57.842164  233379 main.go:141] libmachine: () Calling .GetVersion
	I1216 10:57:57.842750  233379 main.go:141] libmachine: Using API Version  1
	I1216 10:57:57.842765  233379 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 10:57:57.843039  233379 main.go:141] libmachine: () Calling .GetMachineName
	I1216 10:57:57.843250  233379 main.go:141] libmachine: (ha-943381-m04) Calling .DriverName
	I1216 10:57:57.843438  233379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 10:57:57.843458  233379 main.go:141] libmachine: (ha-943381-m04) Calling .GetSSHHostname
	I1216 10:57:57.846296  233379 main.go:141] libmachine: (ha-943381-m04) DBG | domain ha-943381-m04 has defined MAC address 52:54:00:e5:24:19 in network mk-ha-943381
	I1216 10:57:57.846849  233379 main.go:141] libmachine: (ha-943381-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:24:19", ip: ""} in network mk-ha-943381: {Iface:virbr1 ExpiryTime:2024-12-16 11:55:33 +0000 UTC Type:0 Mac:52:54:00:e5:24:19 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-943381-m04 Clientid:01:52:54:00:e5:24:19}
	I1216 10:57:57.846878  233379 main.go:141] libmachine: (ha-943381-m04) DBG | domain ha-943381-m04 has defined IP address 192.168.39.170 and MAC address 52:54:00:e5:24:19 in network mk-ha-943381
	I1216 10:57:57.846977  233379 main.go:141] libmachine: (ha-943381-m04) Calling .GetSSHPort
	I1216 10:57:57.847159  233379 main.go:141] libmachine: (ha-943381-m04) Calling .GetSSHKeyPath
	I1216 10:57:57.847297  233379 main.go:141] libmachine: (ha-943381-m04) Calling .GetSSHUsername
	I1216 10:57:57.847422  233379 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/ha-943381-m04/id_rsa Username:docker}
	I1216 10:57:57.931954  233379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 10:57:57.945815  233379 status.go:176] ha-943381-m04 status: &{Name:ha-943381-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.42s)
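
Note that the status command exits non-zero (7 in this run) while any node is stopped, so the harness records a Non-zero exit above without the test failing. A minimal sketch of the stop-and-check sequence:

  # stop only the second control-plane node
  out/minikube-linux-amd64 -p ha-943381 node stop m02 -v=7 --alsologtostderr
  # check the exit code as well as the output; it stays non-zero until the node is back
  out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr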

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (49.12s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-943381 node start m02 -v=7 --alsologtostderr: (48.200526366s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.12s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1216 10:58:48.522193  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (443.01s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-943381 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-943381 -v=7 --alsologtostderr
E1216 10:58:50.704921  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:00:11.588439  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:01:06.843851  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:01:34.546930  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-943381 -v=7 --alsologtostderr: (4m34.194682517s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-943381 --wait=true -v=7 --alsologtostderr
E1216 11:03:48.522063  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:06:06.844122  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-943381 --wait=true -v=7 --alsologtostderr: (2m48.705452226s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-943381
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (443.01s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.63s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-943381 node delete m03 -v=7 --alsologtostderr: (15.863278042s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.63s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 stop -v=7 --alsologtostderr
E1216 11:08:48.522010  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-943381 stop -v=7 --alsologtostderr: (4m32.821306718s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr: exit status 7 (120.12545ms)

                                                
                                                
-- stdout --
	ha-943381
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-943381-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-943381-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:11:01.748630  237598 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:11:01.748782  237598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:11:01.748793  237598 out.go:358] Setting ErrFile to fd 2...
	I1216 11:11:01.748799  237598 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:11:01.749038  237598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:11:01.749239  237598 out.go:352] Setting JSON to false
	I1216 11:11:01.749274  237598 mustload.go:65] Loading cluster: ha-943381
	I1216 11:11:01.749318  237598 notify.go:220] Checking for updates...
	I1216 11:11:01.749733  237598 config.go:182] Loaded profile config "ha-943381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:11:01.749759  237598 status.go:174] checking status of ha-943381 ...
	I1216 11:11:01.750202  237598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:11:01.750271  237598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:11:01.775171  237598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36795
	I1216 11:11:01.775821  237598 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:11:01.776491  237598 main.go:141] libmachine: Using API Version  1
	I1216 11:11:01.776521  237598 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:11:01.777009  237598 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:11:01.777232  237598 main.go:141] libmachine: (ha-943381) Calling .GetState
	I1216 11:11:01.779045  237598 status.go:371] ha-943381 host status = "Stopped" (err=<nil>)
	I1216 11:11:01.779063  237598 status.go:384] host is not running, skipping remaining checks
	I1216 11:11:01.779071  237598 status.go:176] ha-943381 status: &{Name:ha-943381 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:11:01.779110  237598 status.go:174] checking status of ha-943381-m02 ...
	I1216 11:11:01.779574  237598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:11:01.779627  237598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:11:01.794691  237598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41633
	I1216 11:11:01.795151  237598 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:11:01.795643  237598 main.go:141] libmachine: Using API Version  1
	I1216 11:11:01.795664  237598 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:11:01.796003  237598 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:11:01.796163  237598 main.go:141] libmachine: (ha-943381-m02) Calling .GetState
	I1216 11:11:01.798042  237598 status.go:371] ha-943381-m02 host status = "Stopped" (err=<nil>)
	I1216 11:11:01.798059  237598 status.go:384] host is not running, skipping remaining checks
	I1216 11:11:01.798066  237598 status.go:176] ha-943381-m02 status: &{Name:ha-943381-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:11:01.798084  237598 status.go:174] checking status of ha-943381-m04 ...
	I1216 11:11:01.798384  237598 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:11:01.798423  237598 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:11:01.813908  237598 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35509
	I1216 11:11:01.814438  237598 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:11:01.815090  237598 main.go:141] libmachine: Using API Version  1
	I1216 11:11:01.815116  237598 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:11:01.815548  237598 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:11:01.815761  237598 main.go:141] libmachine: (ha-943381-m04) Calling .GetState
	I1216 11:11:01.817636  237598 status.go:371] ha-943381-m04 host status = "Stopped" (err=<nil>)
	I1216 11:11:01.817656  237598 status.go:384] host is not running, skipping remaining checks
	I1216 11:11:01.817664  237598 status.go:176] ha-943381-m04 status: &{Name:ha-943381-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.94s)
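Aside on the stderr above: the status.go:176 lines (for example &{Name:ha-943381 Host:Stopped Kubelet:Stopped ...}) are Go structs printed with %+v. The sketch below uses a struct with the same field names purely as an assumed illustration of why those lines look the way they do; minikube's actual status type lives in its own codebase and is not reproduced here.

package main

import "fmt"

// Status mirrors only the field names visible in the status.go log lines above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := &Status{
		Name:       "ha-943381",
		Host:       "Stopped",
		Kubelet:    "Stopped",
		APIServer:  "Stopped",
		Kubeconfig: "Stopped",
	}
	// %+v on a struct pointer yields the &{Name:... Host:...} form seen in the log.
	fmt.Printf("%+v\n", s)
}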

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (125.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-943381 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 11:11:06.843771  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:12:29.908412  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-943381 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m4.733538017s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (125.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-943381 --control-plane -v=7 --alsologtostderr
E1216 11:13:48.521927  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-943381 --control-plane -v=7 --alsologtostderr: (1m14.801867654s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-943381 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.65s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                    
TestJSONOutput/start/Command (56.12s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-690888 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-690888 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (56.120120453s)
--- PASS: TestJSONOutput/start/Command (56.12s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-690888 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-690888 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.68s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-690888 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-690888 --output=json --user=testUser: (6.676815127s)
--- PASS: TestJSONOutput/stop/Command (6.68s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-334238 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-334238 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.710729ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"49ccc582-9d12-4d0f-aa08-d5ed1507485b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-334238] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"31eebbfc-1318-4b46-8aee-1368fcf0d19f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20107"}}
	{"specversion":"1.0","id":"bf7687d9-e005-43e1-90aa-6c96dfd45fc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"26b392a9-0fce-4149-9fad-33ff6b4406b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig"}}
	{"specversion":"1.0","id":"c73696fc-8a41-4223-954a-2c82bcc37584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube"}}
	{"specversion":"1.0","id":"55d5f33d-8dab-4764-b9c8-158809c84900","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6e01ec72-1d56-42f2-80e2-71ed3eba4586","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"96dafbce-0c53-457d-8041-6f59374acc20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-334238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-334238
--- PASS: TestErrorJSONOutput (0.20s)
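TestErrorJSONOutput exercises minikube's --output=json mode, which emits one CloudEvents-style JSON object per line (specversion, id, source, type, datacontenttype, data), as seen in the stdout block above. Below is a minimal sketch, assuming only the fields visible in that output, of decoding such a line and pulling the error name, message, and exit code out of its data map; this is not minikube's own parsing code.

package main

import (
	"encoding/json"
	"fmt"
)

// event covers just the fields visible in the log above; the step, info, and
// error events printed by minikube all share this envelope.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// One of the io.k8s.sigs.minikube.error lines from the stdout block above,
	// shortened to the fields used here.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%s, exit code %s)\n",
		ev.Type, ev.Data["message"], ev.Data["name"], ev.Data["exitcode"])
}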

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (88.41s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-317408 --driver=kvm2  --container-runtime=crio
E1216 11:16:06.850109  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-317408 --driver=kvm2  --container-runtime=crio: (41.862852068s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-331724 --driver=kvm2  --container-runtime=crio
E1216 11:16:51.590658  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-331724 --driver=kvm2  --container-runtime=crio: (43.94345887s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-317408
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-331724
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-331724" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-331724
helpers_test.go:175: Cleaning up "first-317408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-317408
--- PASS: TestMinikubeProfile (88.41s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (24.17s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-468547 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-468547 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.166891661s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.17s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-468547 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-468547 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (31.07s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-489340 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-489340 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.066942466s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.07s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.5s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489340 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489340 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.50s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-468547 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489340 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489340 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-489340
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-489340: (1.285294581s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.67s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-489340
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-489340: (20.666631595s)
--- PASS: TestMountStart/serial/RestartStopped (21.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489340 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-489340 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (107.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851147 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 11:18:48.522064  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851147 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m47.534416449s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (107.96s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-851147 -- rollout status deployment/busybox: (3.565855135s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- exec busybox-7dff88458-ct2m7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- exec busybox-7dff88458-sdj86 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- exec busybox-7dff88458-ct2m7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- exec busybox-7dff88458-sdj86 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- exec busybox-7dff88458-ct2m7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- exec busybox-7dff88458-sdj86 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.12s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- exec busybox-7dff88458-ct2m7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- exec busybox-7dff88458-ct2m7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- exec busybox-7dff88458-sdj86 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-851147 -- exec busybox-7dff88458-sdj86 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
TestMultiNode/serial/AddNode (51.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-851147 -v 3 --alsologtostderr
E1216 11:21:06.844039  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-851147 -v 3 --alsologtostderr: (51.186462387s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.76s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-851147 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp testdata/cp-test.txt multinode-851147:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp multinode-851147:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3919338259/001/cp-test_multinode-851147.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp multinode-851147:/home/docker/cp-test.txt multinode-851147-m02:/home/docker/cp-test_multinode-851147_multinode-851147-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m02 "sudo cat /home/docker/cp-test_multinode-851147_multinode-851147-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp multinode-851147:/home/docker/cp-test.txt multinode-851147-m03:/home/docker/cp-test_multinode-851147_multinode-851147-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m03 "sudo cat /home/docker/cp-test_multinode-851147_multinode-851147-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp testdata/cp-test.txt multinode-851147-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp multinode-851147-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3919338259/001/cp-test_multinode-851147-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp multinode-851147-m02:/home/docker/cp-test.txt multinode-851147:/home/docker/cp-test_multinode-851147-m02_multinode-851147.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147 "sudo cat /home/docker/cp-test_multinode-851147-m02_multinode-851147.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp multinode-851147-m02:/home/docker/cp-test.txt multinode-851147-m03:/home/docker/cp-test_multinode-851147-m02_multinode-851147-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m03 "sudo cat /home/docker/cp-test_multinode-851147-m02_multinode-851147-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp testdata/cp-test.txt multinode-851147-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp multinode-851147-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3919338259/001/cp-test_multinode-851147-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp multinode-851147-m03:/home/docker/cp-test.txt multinode-851147:/home/docker/cp-test_multinode-851147-m03_multinode-851147.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147 "sudo cat /home/docker/cp-test_multinode-851147-m03_multinode-851147.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 cp multinode-851147-m03:/home/docker/cp-test.txt multinode-851147-m02:/home/docker/cp-test_multinode-851147-m03_multinode-851147-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 ssh -n multinode-851147-m02 "sudo cat /home/docker/cp-test_multinode-851147-m03_multinode-851147-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.46s)

                                                
                                    
TestMultiNode/serial/StopNode (2.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-851147 node stop m03: (1.452604384s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851147 status: exit status 7 (440.602815ms)

                                                
                                                
-- stdout --
	multinode-851147
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-851147-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-851147-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851147 status --alsologtostderr: exit status 7 (436.543793ms)

                                                
                                                
-- stdout --
	multinode-851147
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-851147-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-851147-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:21:18.820220  245759 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:21:18.820357  245759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:21:18.820366  245759 out.go:358] Setting ErrFile to fd 2...
	I1216 11:21:18.820371  245759 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:21:18.820560  245759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:21:18.820723  245759 out.go:352] Setting JSON to false
	I1216 11:21:18.820764  245759 mustload.go:65] Loading cluster: multinode-851147
	I1216 11:21:18.820870  245759 notify.go:220] Checking for updates...
	I1216 11:21:18.821275  245759 config.go:182] Loaded profile config "multinode-851147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:21:18.821301  245759 status.go:174] checking status of multinode-851147 ...
	I1216 11:21:18.821835  245759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:21:18.821906  245759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:21:18.843946  245759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44899
	I1216 11:21:18.844515  245759 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:21:18.845363  245759 main.go:141] libmachine: Using API Version  1
	I1216 11:21:18.845411  245759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:21:18.845790  245759 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:21:18.846022  245759 main.go:141] libmachine: (multinode-851147) Calling .GetState
	I1216 11:21:18.847752  245759 status.go:371] multinode-851147 host status = "Running" (err=<nil>)
	I1216 11:21:18.847771  245759 host.go:66] Checking if "multinode-851147" exists ...
	I1216 11:21:18.848081  245759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:21:18.848131  245759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:21:18.864070  245759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33799
	I1216 11:21:18.864548  245759 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:21:18.865152  245759 main.go:141] libmachine: Using API Version  1
	I1216 11:21:18.865188  245759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:21:18.865547  245759 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:21:18.865728  245759 main.go:141] libmachine: (multinode-851147) Calling .GetIP
	I1216 11:21:18.868560  245759 main.go:141] libmachine: (multinode-851147) DBG | domain multinode-851147 has defined MAC address 52:54:00:88:03:16 in network mk-multinode-851147
	I1216 11:21:18.869016  245759 main.go:141] libmachine: (multinode-851147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:03:16", ip: ""} in network mk-multinode-851147: {Iface:virbr1 ExpiryTime:2024-12-16 12:18:37 +0000 UTC Type:0 Mac:52:54:00:88:03:16 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:multinode-851147 Clientid:01:52:54:00:88:03:16}
	I1216 11:21:18.869041  245759 main.go:141] libmachine: (multinode-851147) DBG | domain multinode-851147 has defined IP address 192.168.39.143 and MAC address 52:54:00:88:03:16 in network mk-multinode-851147
	I1216 11:21:18.869238  245759 host.go:66] Checking if "multinode-851147" exists ...
	I1216 11:21:18.869691  245759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:21:18.869751  245759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:21:18.886597  245759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I1216 11:21:18.887084  245759 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:21:18.887567  245759 main.go:141] libmachine: Using API Version  1
	I1216 11:21:18.887593  245759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:21:18.888009  245759 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:21:18.888202  245759 main.go:141] libmachine: (multinode-851147) Calling .DriverName
	I1216 11:21:18.888390  245759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 11:21:18.888413  245759 main.go:141] libmachine: (multinode-851147) Calling .GetSSHHostname
	I1216 11:21:18.891664  245759 main.go:141] libmachine: (multinode-851147) DBG | domain multinode-851147 has defined MAC address 52:54:00:88:03:16 in network mk-multinode-851147
	I1216 11:21:18.892092  245759 main.go:141] libmachine: (multinode-851147) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:88:03:16", ip: ""} in network mk-multinode-851147: {Iface:virbr1 ExpiryTime:2024-12-16 12:18:37 +0000 UTC Type:0 Mac:52:54:00:88:03:16 Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:multinode-851147 Clientid:01:52:54:00:88:03:16}
	I1216 11:21:18.892126  245759 main.go:141] libmachine: (multinode-851147) DBG | domain multinode-851147 has defined IP address 192.168.39.143 and MAC address 52:54:00:88:03:16 in network mk-multinode-851147
	I1216 11:21:18.892300  245759 main.go:141] libmachine: (multinode-851147) Calling .GetSSHPort
	I1216 11:21:18.892470  245759 main.go:141] libmachine: (multinode-851147) Calling .GetSSHKeyPath
	I1216 11:21:18.892612  245759 main.go:141] libmachine: (multinode-851147) Calling .GetSSHUsername
	I1216 11:21:18.892728  245759 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/multinode-851147/id_rsa Username:docker}
	I1216 11:21:18.973037  245759 ssh_runner.go:195] Run: systemctl --version
	I1216 11:21:18.979239  245759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:21:18.993231  245759 kubeconfig.go:125] found "multinode-851147" server: "https://192.168.39.143:8443"
	I1216 11:21:18.993279  245759 api_server.go:166] Checking apiserver status ...
	I1216 11:21:18.993313  245759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:21:19.006692  245759 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1109/cgroup
	W1216 11:21:19.016174  245759 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1109/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 11:21:19.016228  245759 ssh_runner.go:195] Run: ls
	I1216 11:21:19.020155  245759 api_server.go:253] Checking apiserver healthz at https://192.168.39.143:8443/healthz ...
	I1216 11:21:19.024430  245759 api_server.go:279] https://192.168.39.143:8443/healthz returned 200:
	ok
	I1216 11:21:19.024453  245759 status.go:463] multinode-851147 apiserver status = Running (err=<nil>)
	I1216 11:21:19.024463  245759 status.go:176] multinode-851147 status: &{Name:multinode-851147 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:21:19.024483  245759 status.go:174] checking status of multinode-851147-m02 ...
	I1216 11:21:19.024796  245759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:21:19.024827  245759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:21:19.040400  245759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I1216 11:21:19.040823  245759 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:21:19.041441  245759 main.go:141] libmachine: Using API Version  1
	I1216 11:21:19.041465  245759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:21:19.041793  245759 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:21:19.041978  245759 main.go:141] libmachine: (multinode-851147-m02) Calling .GetState
	I1216 11:21:19.043445  245759 status.go:371] multinode-851147-m02 host status = "Running" (err=<nil>)
	I1216 11:21:19.043462  245759 host.go:66] Checking if "multinode-851147-m02" exists ...
	I1216 11:21:19.043779  245759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:21:19.043812  245759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:21:19.059568  245759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I1216 11:21:19.060034  245759 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:21:19.060550  245759 main.go:141] libmachine: Using API Version  1
	I1216 11:21:19.060574  245759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:21:19.060920  245759 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:21:19.061158  245759 main.go:141] libmachine: (multinode-851147-m02) Calling .GetIP
	I1216 11:21:19.064240  245759 main.go:141] libmachine: (multinode-851147-m02) DBG | domain multinode-851147-m02 has defined MAC address 52:54:00:c9:45:45 in network mk-multinode-851147
	I1216 11:21:19.064738  245759 main.go:141] libmachine: (multinode-851147-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:45:45", ip: ""} in network mk-multinode-851147: {Iface:virbr1 ExpiryTime:2024-12-16 12:19:38 +0000 UTC Type:0 Mac:52:54:00:c9:45:45 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-851147-m02 Clientid:01:52:54:00:c9:45:45}
	I1216 11:21:19.064771  245759 main.go:141] libmachine: (multinode-851147-m02) DBG | domain multinode-851147-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:c9:45:45 in network mk-multinode-851147
	I1216 11:21:19.064966  245759 host.go:66] Checking if "multinode-851147-m02" exists ...
	I1216 11:21:19.065271  245759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:21:19.065297  245759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:21:19.080171  245759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33549
	I1216 11:21:19.081078  245759 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:21:19.082610  245759 main.go:141] libmachine: Using API Version  1
	I1216 11:21:19.082637  245759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:21:19.083007  245759 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:21:19.083255  245759 main.go:141] libmachine: (multinode-851147-m02) Calling .DriverName
	I1216 11:21:19.083452  245759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 11:21:19.083477  245759 main.go:141] libmachine: (multinode-851147-m02) Calling .GetSSHHostname
	I1216 11:21:19.086239  245759 main.go:141] libmachine: (multinode-851147-m02) DBG | domain multinode-851147-m02 has defined MAC address 52:54:00:c9:45:45 in network mk-multinode-851147
	I1216 11:21:19.086792  245759 main.go:141] libmachine: (multinode-851147-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:45:45", ip: ""} in network mk-multinode-851147: {Iface:virbr1 ExpiryTime:2024-12-16 12:19:38 +0000 UTC Type:0 Mac:52:54:00:c9:45:45 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:multinode-851147-m02 Clientid:01:52:54:00:c9:45:45}
	I1216 11:21:19.086829  245759 main.go:141] libmachine: (multinode-851147-m02) DBG | domain multinode-851147-m02 has defined IP address 192.168.39.49 and MAC address 52:54:00:c9:45:45 in network mk-multinode-851147
	I1216 11:21:19.086969  245759 main.go:141] libmachine: (multinode-851147-m02) Calling .GetSSHPort
	I1216 11:21:19.087139  245759 main.go:141] libmachine: (multinode-851147-m02) Calling .GetSSHKeyPath
	I1216 11:21:19.087301  245759 main.go:141] libmachine: (multinode-851147-m02) Calling .GetSSHUsername
	I1216 11:21:19.087438  245759 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20107-210204/.minikube/machines/multinode-851147-m02/id_rsa Username:docker}
	I1216 11:21:19.171791  245759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:21:19.185084  245759 status.go:176] multinode-851147-m02 status: &{Name:multinode-851147-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:21:19.185122  245759 status.go:174] checking status of multinode-851147-m03 ...
	I1216 11:21:19.185485  245759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:21:19.185538  245759 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:21:19.201752  245759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45909
	I1216 11:21:19.202346  245759 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:21:19.202866  245759 main.go:141] libmachine: Using API Version  1
	I1216 11:21:19.202890  245759 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:21:19.203224  245759 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:21:19.203460  245759 main.go:141] libmachine: (multinode-851147-m03) Calling .GetState
	I1216 11:21:19.205291  245759 status.go:371] multinode-851147-m03 host status = "Stopped" (err=<nil>)
	I1216 11:21:19.205309  245759 status.go:384] host is not running, skipping remaining checks
	I1216 11:21:19.205317  245759 status.go:176] multinode-851147-m03 status: &{Name:multinode-851147-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-851147 node start m03 -v=7 --alsologtostderr: (36.730420618s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.36s)
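
The node restart above goes entirely through the minikube node subcommands. A minimal sketch of the same stop/start/verify cycle, assuming the multinode-851147 profile from this run still exists (commands mirror the ones in the log; flags may differ between minikube versions):

    # Stop the third node, bring it back, then confirm every node reports Ready.
    out/minikube-linux-amd64 -p multinode-851147 node stop m03
    out/minikube-linux-amd64 -p multinode-851147 node start m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p multinode-851147 status -v=7 --alsologtostderr
    kubectl get nodes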

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (340.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-851147
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-851147
E1216 11:23:48.522719  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-851147: (3m3.252617165s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851147 --wait=true -v=8 --alsologtostderr
E1216 11:26:06.844403  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851147 --wait=true -v=8 --alsologtostderr: (2m37.084666349s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-851147
--- PASS: TestMultiNode/serial/RestartKeepsNodes (340.44s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-851147 node delete m03: (1.66841739s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.21s)
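
The Ready check at multinode_test.go:444 is a plain go-template over kubectl get nodes. Stripped of the test-harness quoting, and assuming the kubeconfig context for this profile is active, it is roughly:

    # Prints one Ready condition status per node; every line should read "True".
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'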

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 stop
E1216 11:28:48.522476  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:29:09.910369  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-851147 stop: (3m1.706625142s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851147 status: exit status 7 (97.945859ms)

                                                
                                                
-- stdout --
	multinode-851147
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-851147-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-851147 status --alsologtostderr: exit status 7 (99.058676ms)

                                                
                                                
-- stdout --
	multinode-851147
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-851147-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:30:41.070903  248747 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:30:41.071035  248747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:30:41.071045  248747 out.go:358] Setting ErrFile to fd 2...
	I1216 11:30:41.071049  248747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:30:41.071292  248747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:30:41.071482  248747 out.go:352] Setting JSON to false
	I1216 11:30:41.071512  248747 mustload.go:65] Loading cluster: multinode-851147
	I1216 11:30:41.071636  248747 notify.go:220] Checking for updates...
	I1216 11:30:41.072104  248747 config.go:182] Loaded profile config "multinode-851147": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:30:41.072136  248747 status.go:174] checking status of multinode-851147 ...
	I1216 11:30:41.072747  248747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:30:41.072825  248747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:30:41.094072  248747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I1216 11:30:41.094669  248747 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:30:41.095246  248747 main.go:141] libmachine: Using API Version  1
	I1216 11:30:41.095270  248747 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:30:41.095715  248747 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:30:41.095971  248747 main.go:141] libmachine: (multinode-851147) Calling .GetState
	I1216 11:30:41.097482  248747 status.go:371] multinode-851147 host status = "Stopped" (err=<nil>)
	I1216 11:30:41.097498  248747 status.go:384] host is not running, skipping remaining checks
	I1216 11:30:41.097504  248747 status.go:176] multinode-851147 status: &{Name:multinode-851147 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:30:41.097560  248747 status.go:174] checking status of multinode-851147-m02 ...
	I1216 11:30:41.097873  248747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1216 11:30:41.097917  248747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1216 11:30:41.113583  248747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42233
	I1216 11:30:41.114077  248747 main.go:141] libmachine: () Calling .GetVersion
	I1216 11:30:41.114673  248747 main.go:141] libmachine: Using API Version  1
	I1216 11:30:41.114708  248747 main.go:141] libmachine: () Calling .SetConfigRaw
	I1216 11:30:41.115146  248747 main.go:141] libmachine: () Calling .GetMachineName
	I1216 11:30:41.115361  248747 main.go:141] libmachine: (multinode-851147-m02) Calling .GetState
	I1216 11:30:41.117323  248747 status.go:371] multinode-851147-m02 host status = "Stopped" (err=<nil>)
	I1216 11:30:41.117341  248747 status.go:384] host is not running, skipping remaining checks
	I1216 11:30:41.117347  248747 status.go:176] multinode-851147-m02 status: &{Name:multinode-851147-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.90s)
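
The exit status 7 above is how minikube status reports a stopped host, so scripted checks have to tolerate a non-zero exit. A minimal sketch, assuming the profile name from this run:

    # status exits non-zero (7 in this run) when the host is stopped, so guard the call.
    if ! out/minikube-linux-amd64 -p multinode-851147 status; then
      echo "cluster multinode-851147 is not fully running"
    fi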

                                                
                                    
TestMultiNode/serial/RestartMultiNode (114.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851147 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 11:31:06.844262  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851147 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.787700266s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-851147 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (114.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (45.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-851147
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851147-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-851147-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (69.495039ms)

                                                
                                                
-- stdout --
	* [multinode-851147-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-851147-m02' is duplicated with machine name 'multinode-851147-m02' in profile 'multinode-851147'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-851147-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-851147-m03 --driver=kvm2  --container-runtime=crio: (43.952760929s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-851147
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-851147: exit status 80 (224.870395ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-851147 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-851147-m03 already exists in multinode-851147-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-851147-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.10s)
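
Both non-zero exits above are name-collision guards: a new profile may not reuse a machine name that already belongs to another profile (MK_USAGE), and node add refuses a node name that already exists (GUEST_NODE_ADD). A sketch of the conflict-free path, using a hypothetical profile name multinode-demo that is assumed to be unused on the host:

    # Check existing profiles first, then pick a name that does not collide.
    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 start -p multinode-demo --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 node add -p multinode-demo
    out/minikube-linux-amd64 delete -p multinode-demo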

                                                
                                    
TestScheduledStopUnix (110.51s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-375406 --memory=2048 --driver=kvm2  --container-runtime=crio
E1216 11:36:06.844013  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-375406 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.881681434s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375406 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-375406 -n scheduled-stop-375406
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375406 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1216 11:36:45.093730  217519 retry.go:31] will retry after 120.262µs: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.094860  217519 retry.go:31] will retry after 166.612µs: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.095996  217519 retry.go:31] will retry after 172.204µs: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.097139  217519 retry.go:31] will retry after 290.233µs: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.098269  217519 retry.go:31] will retry after 653.193µs: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.099401  217519 retry.go:31] will retry after 929.558µs: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.100525  217519 retry.go:31] will retry after 1.675267ms: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.102736  217519 retry.go:31] will retry after 1.660566ms: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.104910  217519 retry.go:31] will retry after 1.533262ms: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.107113  217519 retry.go:31] will retry after 5.371813ms: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.113340  217519 retry.go:31] will retry after 6.977264ms: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.120556  217519 retry.go:31] will retry after 7.522173ms: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.128853  217519 retry.go:31] will retry after 14.74167ms: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.144089  217519 retry.go:31] will retry after 25.004279ms: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
I1216 11:36:45.169340  217519 retry.go:31] will retry after 33.944988ms: open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/scheduled-stop-375406/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375406 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-375406 -n scheduled-stop-375406
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-375406
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-375406 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-375406
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-375406: exit status 7 (72.30596ms)

                                                
                                                
-- stdout --
	scheduled-stop-375406
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-375406 -n scheduled-stop-375406
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-375406 -n scheduled-stop-375406: exit status 7 (67.059538ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-375406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-375406
--- PASS: TestScheduledStopUnix (110.51s)
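
The test above exercises minikube's scheduled stop: a stop can be queued, cancelled, and re-queued with a shorter delay. A compressed sketch of the same flow, assuming the scheduled-stop-375406 profile is running (commands mirror the ones in the log):

    # Queue a stop, confirm a schedule is recorded, then cancel it.
    out/minikube-linux-amd64 stop -p scheduled-stop-375406 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-375406
    out/minikube-linux-amd64 stop -p scheduled-stop-375406 --cancel-scheduled
    # Re-queue with a short delay; once it fires, status exits 7 and reports Stopped.
    out/minikube-linux-amd64 stop -p scheduled-stop-375406 --schedule 15s
    sleep 20
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-375406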

                                                
                                    
TestRunningBinaryUpgrade (200.81s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3752900241 start -p running-upgrade-446525 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3752900241 start -p running-upgrade-446525 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m25.569238744s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-446525 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-446525 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m53.698308015s)
helpers_test.go:175: Cleaning up "running-upgrade-446525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-446525
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-446525: (1.05069155s)
--- PASS: TestRunningBinaryUpgrade (200.81s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911686 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-911686 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (99.639017ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-911686] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
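
The MK_USAGE exit above is the intended guard: --no-kubernetes and --kubernetes-version are mutually exclusive. Roughly, the valid forms are either of the following (the version and flag combination here are illustrative, not taken from a passing run in this report):

    # Either pin a Kubernetes version...
    out/minikube-linux-amd64 start -p NoKubernetes-911686 --kubernetes-version=v1.31.2 --driver=kvm2 --container-runtime=crio
    # ...or skip Kubernetes entirely, clearing any globally pinned version first.
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-911686 --no-kubernetes --driver=kvm2 --container-runtime=crio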

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (122.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911686 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911686 --driver=kvm2  --container-runtime=crio: (2m2.319082693s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-911686 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (122.61s)

                                                
                                    
TestNetworkPlugins/group/false (3.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-560939 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-560939 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (117.66821ms)

                                                
                                                
-- stdout --
	* [false-560939] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:39:54.511569  254589 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:39:54.511696  254589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:39:54.511706  254589 out.go:358] Setting ErrFile to fd 2...
	I1216 11:39:54.511711  254589 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:39:54.511893  254589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-210204/.minikube/bin
	I1216 11:39:54.512516  254589 out.go:352] Setting JSON to false
	I1216 11:39:54.513606  254589 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12142,"bootTime":1734337053,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 11:39:54.513716  254589 start.go:139] virtualization: kvm guest
	I1216 11:39:54.515873  254589 out.go:177] * [false-560939] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1216 11:39:54.517502  254589 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:39:54.517505  254589 notify.go:220] Checking for updates...
	I1216 11:39:54.518954  254589 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:39:54.520254  254589 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-210204/kubeconfig
	I1216 11:39:54.521607  254589 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-210204/.minikube
	I1216 11:39:54.522935  254589 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 11:39:54.524159  254589 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:39:54.525865  254589 config.go:182] Loaded profile config "NoKubernetes-911686": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:39:54.525961  254589 config.go:182] Loaded profile config "cert-expiration-002454": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:39:54.526038  254589 config.go:182] Loaded profile config "cert-options-533676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:39:54.526132  254589 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:39:54.563880  254589 out.go:177] * Using the kvm2 driver based on user configuration
	I1216 11:39:54.565286  254589 start.go:297] selected driver: kvm2
	I1216 11:39:54.565312  254589 start.go:901] validating driver "kvm2" against <nil>
	I1216 11:39:54.565330  254589 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:39:54.567763  254589 out.go:201] 
	W1216 11:39:54.569188  254589 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1216 11:39:54.570570  254589 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-560939 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-560939" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-560939

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-560939"

                                                
                                                
----------------------- debugLogs end: false-560939 [took: 3.029686457s] --------------------------------
helpers_test.go:175: Cleaning up "false-560939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-560939
--- PASS: TestNetworkPlugins/group/false (3.31s)
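
The failure mode validated here is that the crio runtime refuses --cni=false; some CNI has to be selected. A start that should satisfy the same guard, sketched with a hypothetical profile name and the bridge plugin (any supported --cni value other than false would do):

    # crio requires a CNI, so pass an explicit plugin instead of false.
    out/minikube-linux-amd64 start -p bridge-demo --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p bridge-demo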

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (47.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911686 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911686 --no-kubernetes --driver=kvm2  --container-runtime=crio: (46.539983781s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-911686 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-911686 status -o json: exit status 2 (274.070465ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-911686","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-911686
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-911686: (1.053845306s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (47.87s)

                                                
                                    
TestNoKubernetes/serial/Start (41.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911686 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911686 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.547990196s)
--- PASS: TestNoKubernetes/serial/Start (41.55s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-911686 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-911686 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.162068ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
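
The check above asserts only an exit code: systemctl is-active returns non-zero (surfaced by the ssh wrapper as status 3) when kubelet is not running. The same check as a standalone guard, assuming the profile from this run:

    # Exit 0 means kubelet is active inside the VM; any non-zero status means it is not.
    if out/minikube-linux-amd64 ssh -p NoKubernetes-911686 "sudo systemctl is-active --quiet service kubelet"; then
      echo "kubelet is running"
    else
      echo "kubelet is not running (expected for --no-kubernetes)"
    fi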

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.77s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-911686
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-911686: (1.302365461s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (43.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-911686 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-911686 --driver=kvm2  --container-runtime=crio: (43.669437981s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.67s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-911686 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-911686 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.931288ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.65s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (93.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3940554025 start -p stopped-upgrade-742971 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3940554025 start -p stopped-upgrade-742971 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (47.767463792s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3940554025 -p stopped-upgrade-742971 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3940554025 -p stopped-upgrade-742971 stop: (1.455751204s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-742971 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-742971 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.126527426s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (93.35s)
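
Unlike the running upgrade earlier, this variant stops the cluster built by the old release before the binary under test starts it again, and MinikubeLogs below then confirms logs still render. A sketch of the sequence, with OLD_MINIKUBE standing in for the downloaded v1.26.0 binary (the /tmp path in the log is a temporary file, so a placeholder is used here):

    OLD_MINIKUBE=/path/to/minikube-v1.26.0   # placeholder for the older release binary
    "$OLD_MINIKUBE" start -p stopped-upgrade-742971 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    "$OLD_MINIKUBE" -p stopped-upgrade-742971 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-742971 --memory=2200 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 logs -p stopped-upgrade-742971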

                                                
                                    
TestPause/serial/Start (65.04s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-445906 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E1216 11:43:48.521798  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-445906 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m5.037623384s)
--- PASS: TestPause/serial/Start (65.04s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-742971
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (59.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (59.353696877s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (73.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m13.313375753s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.31s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (51.43s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-445906 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-445906 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.40181083s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-560939 "pgrep -a kubelet"
I1216 11:44:51.996561  217519 config.go:182] Loaded profile config "auto-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (13.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-560939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hd84w" [2ac78126-090f-4877-8206-68e9e53ad266] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hd84w" [2ac78126-090f-4877-8206-68e9e53ad266] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004428993s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-560939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
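
The three short checks above (DNS, Localhost, HairPin) all run inside the netcat deployment created by NetCatPod and share one kubectl exec pattern: resolve the in-cluster DNS name, connect to the pod's own port via localhost, and finally connect back to the pod through its netcat service, which is the hairpin case. Condensed into a reproducible sequence with the auto-560939 context from the log:

	kubectl --context auto-560939 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"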

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (74.70s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m14.704280217s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.70s)

                                                
                                    
x
+
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-445906 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-445906 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-445906 --output=json --layout=cluster: exit status 2 (264.901817ms)

                                                
                                                
-- stdout --
	{"Name":"pause-445906","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-445906","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
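
The status JSON above encodes cluster and component states as HTTP-style codes (418 Paused, 405 Stopped, 200 OK), which is why the command exits non-zero even though the test passes: a paused apiserver plus a stopped kubelet is exactly what VerifyStatus expects right after pause. A quick way to inspect just the component states; the jq pipeline is only an illustration and not part of the test:

	out/minikube-linux-amd64 status -p pause-445906 --output=json --layout=cluster | jq '.Nodes[].Components'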

                                                
                                    
x
+
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-445906 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.89s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-445906 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.89s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.12s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-445906 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-445906 --alsologtostderr -v=5: (1.115807602s)
--- PASS: TestPause/serial/DeletePaused (1.12s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.51s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)
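
DeletePaused shows that a profile can be deleted while still paused, and VerifyDeletedResources then confirms that nothing referencing it remains in the profile list. A rough manual equivalent (the grep is only an illustration, not part of the test):

	out/minikube-linux-amd64 profile list --output json | grep pause-445906 || echo "pause-445906 no longer listed"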

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xdxr9" [0090cc8e-1ea7-4612-8978-73be403e0eee] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004121614s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (88.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m28.364660552s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (88.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-560939 "pgrep -a kubelet"
I1216 11:45:33.179697  217519 config.go:182] Loaded profile config "kindnet-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-560939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8p4k5" [0456d742-4495-46d3-b2ea-f1fa2a979794] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8p4k5" [0456d742-4495-46d3-b2ea-f1fa2a979794] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.008007158s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-560939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (69.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m9.976959709s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-84vd4" [7ca5cfe7-cd53-4254-b817-a1b1b65c3585] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005545725s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-560939 "pgrep -a kubelet"
I1216 11:46:41.953285  217519 config.go:182] Loaded profile config "calico-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (14.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-560939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gtpwg" [2796b5ef-e080-4cc8-ad84-c58822d90458] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gtpwg" [2796b5ef-e080-4cc8-ad84-c58822d90458] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.004322398s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-560939 "pgrep -a kubelet"
I1216 11:46:55.559499  217519 config.go:182] Loaded profile config "custom-flannel-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-560939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5bsfz" [a0a3b299-e694-4b19-b24d-a62558c2294f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5bsfz" [a0a3b299-e694-4b19-b24d-a62558c2294f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.00486161s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-560939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-560939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-560939 "pgrep -a kubelet"
I1216 11:47:11.907404  217519 config.go:182] Loaded profile config "enable-default-cni-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-560939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q2pd9" [b9eb5113-9461-4fb6-987b-c3f3b03fa207] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-q2pd9" [b9eb5113-9461-4fb6-987b-c3f3b03fa207] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004771481s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (74.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m14.61226702s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-560939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (84.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-560939 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m24.063230593s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (106.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-181484 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-181484 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m46.764101556s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (106.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5wrlc" [4c080475-93c7-4763-818f-4e6c90b55a38] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005063426s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-560939 "pgrep -a kubelet"
I1216 11:48:36.180157  217519 config.go:182] Loaded profile config "flannel-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-560939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8xrp2" [f730b5cb-f4d0-4b0b-935e-3ed99d383404] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8xrp2" [f730b5cb-f4d0-4b0b-935e-3ed99d383404] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.006452424s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-560939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-560939 "pgrep -a kubelet"
I1216 11:48:49.445315  217519 config.go:182] Loaded profile config "bridge-560939": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (12.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-560939 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-560939 replace --force -f testdata/netcat-deployment.yaml: (1.374932806s)
I1216 11:48:50.837047  217519 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-q78fg" [59fe660f-c2b4-4895-ada4-dfad5cff07bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-q78fg" [59fe660f-c2b4-4895-ada4-dfad5cff07bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003549886s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-560939 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-560939 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (63.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-987169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-987169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m3.443552679s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.44s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-935544 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 11:49:52.224085  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:49:52.230509  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:49:52.241990  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:49:52.263484  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:49:52.304940  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:49:52.386462  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:49:52.548076  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:49:52.870179  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:49:53.512292  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:49:54.794557  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-935544 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m18.43006089s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.43s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-181484 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5a31728c-5503-4058-ae26-a390394fa977] Pending
helpers_test.go:344: "busybox" [5a31728c-5503-4058-ae26-a390394fa977] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1216 11:49:57.356574  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5a31728c-5503-4058-ae26-a390394fa977] Running
E1216 11:50:02.478293  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004005292s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-181484 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)
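
Each StartStop DeployApp step follows the same pattern: create the busybox pod from testdata, wait for it to reach Running, then exec a trivial command to prove the runtime works end to end. A hand-run equivalent with the no-preload-181484 context from the log (the kubectl wait line stands in for the test helper's 8m0s polling and is only an illustration):

	kubectl --context no-preload-181484 create -f testdata/busybox.yaml
	kubectl --context no-preload-181484 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context no-preload-181484 exec busybox -- /bin/sh -c "ulimit -n"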

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-181484 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-181484 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.138418053s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-181484 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (91.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-181484 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-181484 --alsologtostderr -v=3: (1m31.066450869s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (13.30s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-987169 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ab47a42-8731-4af5-9a7d-8ed0fb59cafc] Pending
helpers_test.go:344: "busybox" [2ab47a42-8731-4af5-9a7d-8ed0fb59cafc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1216 11:50:11.593577  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/addons-020871/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:12.720349  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [2ab47a42-8731-4af5-9a7d-8ed0fb59cafc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.003889731s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-987169 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-987169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-987169 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (91.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-987169 --alsologtostderr -v=3
E1216 11:50:26.955313  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:26.961690  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:26.973126  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:26.994427  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:27.035923  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:27.117438  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:27.278954  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:27.600335  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:28.242630  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:29.524564  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:32.086861  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:33.201720  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:50:37.209094  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-987169 --alsologtostderr -v=3: (1m31.015169428s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-935544 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [850af829-40c9-441c-a6ee-975fa99c5d9e] Pending
helpers_test.go:344: "busybox" [850af829-40c9-441c-a6ee-975fa99c5d9e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [850af829-40c9-441c-a6ee-975fa99c5d9e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004159676s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-935544 exec busybox -- /bin/sh -c "ulimit -n"
E1216 11:50:47.450775  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-935544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-935544 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-935544 --alsologtostderr -v=3
E1216 11:51:06.843888  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/functional-365716/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:07.932980  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:14.163184  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:35.653816  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:35.660286  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:35.671738  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:35.693271  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:35.734769  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:35.816292  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:35.978274  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:36.300144  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:36.941460  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:38.223304  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-935544 --alsologtostderr -v=3: (1m31.317467982s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-181484 -n no-preload-181484
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-181484 -n no-preload-181484: exit status 7 (77.458241ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-181484 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (350.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-181484 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 11:51:40.785573  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:45.907218  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:48.894862  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/kindnet-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-181484 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (5m50.359561062s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-181484 -n no-preload-181484
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (350.78s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-987169 -n embed-certs-987169
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-987169 -n embed-certs-987169: exit status 7 (72.372978ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-987169 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (303.75s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-987169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 11:51:55.823655  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:55.830044  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:55.841391  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:55.862778  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:55.904191  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:55.985686  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:56.147597  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:56.148746  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:56.469525  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:57.110925  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:51:58.392441  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:00.953812  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:06.075309  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:12.186465  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:12.192881  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:12.204874  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:12.226444  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:12.267937  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:12.349402  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:12.511044  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:12.832355  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:13.474095  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:14.756233  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:16.317222  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:16.630039  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:17.317507  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-987169 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (5m3.483824717s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-987169 -n embed-certs-987169
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (303.75s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-935544 -n default-k8s-diff-port-935544
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-935544 -n default-k8s-diff-port-935544: exit status 7 (77.739085ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-935544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (301.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-935544 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 11:52:22.439054  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:32.681287  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:36.085121  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/auto-560939/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:52:36.799231  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-935544 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (5m1.239967866s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-935544 -n default-k8s-diff-port-935544
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (301.51s)

TestStartStop/group/old-k8s-version/serial/Stop (1.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-933974 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-933974 --alsologtostderr -v=3: (1.340628891s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-933974 -n old-k8s-version-933974: exit status 7 (70.177379ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-933974 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bvrpr" [80cbef65-21b2-4874-afd3-9250a76b9a69] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003749488s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bvrpr" [80cbef65-21b2-4874-afd3-9250a76b9a69] Running
E1216 11:57:03.355389  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/calico-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004096879s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-987169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-987169 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (2.84s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-987169 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-987169 -n embed-certs-987169
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-987169 -n embed-certs-987169: exit status 2 (288.392379ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-987169 -n embed-certs-987169
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-987169 -n embed-certs-987169: exit status 2 (326.442083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-987169 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-987169 -n embed-certs-987169
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-987169 -n embed-certs-987169
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.84s)

TestStartStop/group/newest-cni/serial/FirstStart (46.16s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-409154 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-409154 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (46.162810799s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.16s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-n9bgd" [4a77323f-9320-4e3b-9455-82f622682bec] Running
E1216 11:57:23.524611  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/custom-flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005177565s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-n9bgd" [4a77323f-9320-4e3b-9455-82f622682bec] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004874733s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-935544 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8dqn9" [c70ac022-a1a3-459e-8673-34d32bd6de7d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8dqn9" [c70ac022-a1a3-459e-8673-34d32bd6de7d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.005208798s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-935544 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-935544 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-935544 -n default-k8s-diff-port-935544
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-935544 -n default-k8s-diff-port-935544: exit status 2 (250.867ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-935544 -n default-k8s-diff-port-935544
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-935544 -n default-k8s-diff-port-935544: exit status 2 (256.470751ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-935544 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-935544 -n default-k8s-diff-port-935544
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-935544 -n default-k8s-diff-port-935544
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8dqn9" [c70ac022-a1a3-459e-8673-34d32bd6de7d] Running
E1216 11:57:39.888339  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/enable-default-cni-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004479141s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-181484 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-181484 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/no-preload/serial/Pause (3.53s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-181484 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-181484 --alsologtostderr -v=1: (1.601020147s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-181484 -n no-preload-181484
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-181484 -n no-preload-181484: exit status 2 (280.438358ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-181484 -n no-preload-181484
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-181484 -n no-preload-181484: exit status 2 (242.07414ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-181484 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-181484 -n no-preload-181484
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-181484 -n no-preload-181484
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.53s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-409154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-409154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050683637s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/newest-cni/serial/Stop (10.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-409154 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-409154 --alsologtostderr -v=3: (10.314259489s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-409154 -n newest-cni-409154
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-409154 -n newest-cni-409154: exit status 7 (67.691461ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-409154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (35.03s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-409154 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 11:58:29.962933  217519 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/flannel-560939/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-409154 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (34.658845437s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-409154 -n newest-cni-409154
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.03s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-409154 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-409154 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-409154 -n newest-cni-409154
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-409154 -n newest-cni-409154: exit status 2 (236.215773ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-409154 -n newest-cni-409154
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-409154 -n newest-cni-409154: exit status 2 (237.609993ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-409154 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-409154 -n newest-cni-409154
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-409154 -n newest-cni-409154
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)

Test skip (34/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.29s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-020871 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.26s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-560939 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-560939

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-560939

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-560939

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-560939

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-560939

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-560939

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-560939

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-560939

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-560939

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-560939

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

>>> host: /etc/hosts:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

>>> host: /etc/resolv.conf:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-560939

>>> host: crictl pods:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

>>> host: crictl containers:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

>>> k8s: describe netcat deployment:
error: context "kubenet-560939" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-560939" does not exist

>>> k8s: netcat logs:
error: context "kubenet-560939" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-560939" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-560939" does not exist

>>> k8s: coredns logs:
error: context "kubenet-560939" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-560939" does not exist


                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-560939" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-560939" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-560939

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-560939"

                                                
                                                
----------------------- debugLogs end: kubenet-560939 [took: 3.116861768s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-560939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-560939
--- SKIP: TestNetworkPlugins/group/kubenet (3.26s)
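
Editor's note: the repeated "context was not found" / "Profile ... not found" lines above are not failures in themselves. The kubenet group was skipped before a cluster was ever started for it, yet the post-skip debug dump still runs its full battery of diagnostic commands against the nonexistent kubenet-560939 profile, so every section prints the same missing-context error. The following is a minimal, hypothetical sketch of that dump pattern (the helper name and command list are illustrative, not the actual code in the minikube test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// debugLogsSketch is an illustrative stand-in for the test suite's debug dump:
// it runs a fixed set of diagnostic commands for a profile and prints whatever
// they return, best-effort. If the profile was never started, every command
// simply reports that the kubectl context or minikube profile does not exist,
// which is exactly the pattern in the log above.
func debugLogsSketch(profile string) {
	cmds := [][]string{
		{"kubectl", "--context", profile, "get", "nodes,svc,endpoints,ds,deploy,pods", "-A"},
		{"kubectl", "--context", profile, "describe", "deploy", "netcat"},
		{"out/minikube-linux-amd64", "-p", profile, "ssh", "cat /etc/resolv.conf"},
	}
	for _, c := range cmds {
		out, _ := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf(">>> %s:\n%s\n", strings.Join(c, " "), out)
	}
}

func main() {
	debugLogsSketch("kubenet-560939")
}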

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-560939 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-560939" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20107-210204/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 11:39:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.61.125:8443
  name: NoKubernetes-911686
contexts:
- context:
    cluster: NoKubernetes-911686
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 11:39:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-911686
  name: NoKubernetes-911686
current-context: NoKubernetes-911686
kind: Config
preferences: {}
users:
- name: NoKubernetes-911686
  user:
    client-certificate: /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/NoKubernetes-911686/client.crt
    client-key: /home/jenkins/minikube-integration/20107-210204/.minikube/profiles/NoKubernetes-911686/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-560939

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-560939" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-560939"

                                                
                                                
----------------------- debugLogs end: cilium-560939 [took: 3.331362414s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-560939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-560939
--- SKIP: TestNetworkPlugins/group/cilium (3.49s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-126103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-126103
--- SKIP: TestStartStop/group/disable-driver-mounts (0.38s)
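
Editor's note: this SKIP comes from an early guard in the test rather than from any failure; only the profile cleanup ("minikube delete -p disable-driver-mounts-126103") runs afterwards. A minimal, hypothetical sketch of such a guard (the real check in start_stop_delete_test.go may detect the driver differently; the env var below is an assumption for illustration):

package startstop_test

import (
	"os"
	"testing"
)

// TestDisableDriverMountsSketch is a hypothetical stand-in for the guard that
// produces the SKIP above: unless the run uses the virtualbox driver, the test
// exits immediately and no cluster is started for the profile.
func TestDisableDriverMountsSketch(t *testing.T) {
	if os.Getenv("MINIKUBE_TEST_DRIVER") != "virtualbox" { // hypothetical env var, not the real detection
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
	// the real test would exercise --disable-driver-mounts behaviour here
}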

                                                
                                    